Opened 9 years ago

Closed 8 years ago

#289 closed defect (fixed)

2DSA memory display

Reported by: demeler
Owned by: gegorbet
Priority: normal
Milestone: future
Component: ultrascan3
Version:
Keywords: review
Cc:

Description

I think there is an error in the memory displayed in the 2DSA fit control widget: the value apparently needs to be multiplied by the number of threads that are set. For example, when running with 4 threads, "top" shows 404m under VIRT and 309m under RES, while the memory field suggests 81 MB. When I switch to 1 thread, the counter changes only to 80, while VIRT drops to 112m and RES to 44m.
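To illustrate the suspected behavior (a sketch with hypothetical names, not the actual UltraScan3 code): if the widget's figure is a per-thread estimate, a thread-aware total would scale roughly as a fixed base plus a per-thread working set times the thread count.

```python
def estimate_total_mb(per_thread_mb: float, n_threads: int, base_mb: float = 0.0) -> float:
    """Hypothetical thread-aware memory estimate: a fixed base plus a
    per-thread working set, multiplied by the number of worker threads."""
    return base_mb + per_thread_mb * n_threads

# With the numbers from this report: an ~80 MB per-thread figure and
# 4 threads would predict ~320 MB, much closer to the observed 309m RES
# than the displayed 81 MB.
print(estimate_total_mb(80.0, 4))  # 320.0
```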

Change History (5)

comment:1 Changed 9 years ago by gegorbet

  • Owner changed from gorbet to gegorbet

There are two memory values displayed by 2dsa: estimated and actual maximum used. The estimated value does need some re-working, although I've found it hard to predict. The more important value is the actual maximum used, displayed at the end of each iteration.

I did a test with threads=3 and threads=1. For 3 threads, the highest RES value I was able to catch in "top" was 272; the maximum displayed by 2dsa in that case was 289. For 1 thread, the values were 236 and 243.

So, I think the actual maximum memory used is accurate, regardless of number of threads.
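For reference, a minimal sketch (not UltraScan3 code) of how the peak resident set could be read on Linux to cross-check the figure 2dsa displays, by parsing the VmHWM line of /proc/self/status — this is the same high-water mark that "top" RES approaches:

```python
def parse_vm_hwm_kb(status_text: str) -> int:
    """Parse the VmHWM (peak resident set, in kB) line from the text of
    a Linux /proc/<pid>/status file."""
    for line in status_text.splitlines():
        if line.startswith("VmHWM:"):
            # Line looks like: "VmHWM:    278528 kB"
            return int(line.split()[1])
    raise ValueError("no VmHWM line found")

# Sample status text (values illustrative); in a live check you would
# read open("/proc/self/status").read() instead.
sample = "Name:\tus_2dsa\nVmPeak:\t  404000 kB\nVmHWM:\t  278528 kB\n"
print(parse_vm_hwm_kb(sample) // 1024)  # 272 (MB)
```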

comment:2 follow-up: Changed 9 years ago by demeler

My comment was only about the estimated memory displayed in the gui before the fit.

I believe we record this value along with other parameters in the database. We could write a data-mining application to fit the memory usage as a function of the parameter profile, and use the fit to predict the estimated value. This may vary depending on architecture, but probably not by much.

One variable that is tough to predict is how many solutes stay in memory, and how many grid stages are necessary, since both also contribute to the memory footprint.

The real use of this value is to give the user a ballpark figure for how few grids he can get away with, given his computer's configuration. Maybe we should get a grad student from UTSA to do a systematic performance analysis for that.
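The data-mining idea above could be prototyped as a simple least-squares fit of recorded peak memory against a run parameter. A sketch with made-up sample data (the parameter choice and numbers are illustrative, not taken from the database):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of ys = a + b * xs (single predictor)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# Illustrative records: (number of subgrids, observed peak memory in MB).
grids = [16, 25, 36, 49, 64]
mem_mb = [150, 190, 240, 300, 370]

a, b = fit_line(grids, mem_mb)
# Predict the memory for a hypothetical run with 100 subgrids:
predicted_mb = a + b * 100
```

A real predictor would likely need several variables (threads, subgrid sizes, solutes retained), but even a one-variable fit against stored runs would beat a static figure.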

comment:3 in reply to: ↑ 2 Changed 9 years ago by gegorbet

Replying to demeler:

> My comment was only about the estimated memory displayed in the gui before the fit.

OK. I agree that getting a reasonable estimation is important, for the reasons you mention below.

> I believe we record this value along with other parameters in the database and could write a data mining application to fit the memory usage as function of the parameter profile, and use that to predict the estimate value. This may vary dependent on architecture, but probably not by much.

I am not aware of recording the estimation in the DB. If we record something, I think it should be the actual maximum memory used.

> One variable that is tough to predict is how many solutes stay in memory and how many grid stages are necessary, as they will also contribute to the memory.
>
> The use of this variable is really to provide a ballpark telling the user how few grids he can get away with given his computer's configuration. Maybe we should get a grad student from UTSA to do a systematic performance analysis for that.

As you say, it is tough to predict how much memory will be used for a given number of threads and subgrid sizes. Having a grad student systematically analyze the actual memory used for different parameterizations might enable us to build a database that we could then reference at estimation time. As you mentioned, that estimation is important to allow the user to play with parameters and ensure that the run will work reasonably well on the system she is actually using.

comment:4 Changed 8 years ago by gegorbet

  • Keywords review added

I reworked the memory estimation code in 2DSA. It now gives a much more accurate figure.

Review-ready.

comment:5 Changed 8 years ago by dzollars

  • Resolution set to fixed
  • Status changed from new to closed