Thanks for clarifying the performance test setup/scenario.
The difference you see in the response time likely comes from how each metric is measured: the virtual service only records how long it takes to produce a response, whereas the average response time in the staged test run result factors in the entire round-trip time (RTT), i.e. the time for the request to travel to the service, plus the service's processing time, plus the time for the response to come back over the wire. So you're comparing two different types of metrics.
So are you just concerned about the discrepancy between the staged test run results and the VSE database metrics table? Or are you unable to get the level of metric detail you need from the VSE database?
I'm not sure there's a way to lower the value of the lisa.vse.metrics.sample.interval property. It may be this way by design, as anything lower than 1 second could overload the database connection.
If you really need a workaround, one suggestion is to customize the VSM to calculate the information you're looking for yourself. For example, to collect the response time, add a Timestamp filter to both the Listen step and the Responder step, then add a scripting step after the Responder step that calculates the response time from the two timestamp values.
I'm attaching a sample project with the above example implementation for your reference.
Timestamp filter on Listen step:
Timestamp filter on Responder step:
Scripting step after the Responder step:
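For reference, the calculation inside the scripting step boils down to a simple subtraction of the two captured timestamps. Here's a minimal standalone Java sketch of that arithmetic, assuming both filters store epoch-millisecond values; in the actual VSM scripting step you would read the two values from the test state (via testExec) under whatever property names you gave the filters, rather than hard-coding them as below.

```java
public class ResponseTimeCalc {

    // Elapsed time in milliseconds between the two filter timestamps.
    // Mirrors what the scripting step after the Responder step computes.
    static long elapsedMillis(long listenTs, long responderTs) {
        return responderTs - listenTs;
    }

    public static void main(String[] args) {
        // In the VSM these would come from the Timestamp filter on the
        // Listen step and the Timestamp filter on the Responder step
        // (property names are hypothetical and up to you).
        long listenTs = 1_700_000_000_000L;    // example epoch millis
        long responderTs = 1_700_000_000_042L; // example epoch millis

        System.out.println("responseTimeMs=" + elapsedMillis(listenTs, responderTs));
    }
}
```

You can then log this value or write it wherever you need it (a file, a custom table, etc.) from the same scripting step, which gives you per-transaction response times independent of the metrics sample interval.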