<p>Attendance: Julie, Mitchell, Ryan, Alex P., Leo, Alex M.</p>
<h2><u>Today's Tasks</u></h2>
<p><strong>Mitchell: </strong>Worked on the Python project and helped with Ryan's final bash project questions.</p>
<p><strong>Ryan:</strong> Finished the bash project and is about to start the Python project.</p>
<p><strong>Leo:</strong> Worked on lectures, which will restart on March 17th.</p>
<p><strong>Julie, Alex P, Alex M:</strong> Worked on fixing the XF job submission scripts, making them save state after each individual's XF simulation completes. We tried to parallelize as many of the XF scripts as possible, but with more than one running at once, XF asks us to choose a license for each individual, i.e., it requires user input each time. Since we haven't found a way around that AND since running with 1 GPU is super fast (under 5 min per individual), we are going to run them in series.</p>
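<p>The run-in-series plan above could be sketched roughly like this. All names here (<code>run_xf</code>, <code>individuals.txt</code>, <code>state.txt</code>) are placeholders rather than our actual scripts, and the real XF invocation is stubbed out with an <code>echo</code>:</p>

```shell
#!/bin/bash
# Sketch of the serial XF loop with state saving -- all file and
# function names are placeholders, not our real submission scripts.

STATE_FILE="state.txt"
: > "$STATE_FILE"                        # start with empty state for this demo
printf '%s\n' indiv_0 indiv_1 indiv_2 > individuals.txt

run_xf() {
    # Placeholder for the real single-GPU XF run (~5 min per individual).
    echo "simulated $1"
}

while read -r indiv; do
    # Skip individuals already recorded as done, so a restarted job
    # resumes where it left off instead of redoing finished work.
    if grep -qx "$indiv" "$STATE_FILE"; then
        continue
    fi

    # One simulation at a time: launching several XF runs in parallel
    # makes XF prompt for a license choice per individual.
    run_xf "$indiv"

    # Record completion only after the simulation finishes, so a crash
    # mid-run never marks an individual as done.
    echo "$indiv" >> "$STATE_FILE"
done < individuals.txt
```

<p>Re-running the same loop after an interruption skips everything already listed in the state file, which is the behavior we wanted from the submission scripts.</p>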
<p> </p>
<h2><strong><u>TO-DO List</u></strong></h2>
<ol>
<li><u>Scaling test run:</u> Do a test run to see if the scaling (when we also scale the grid spacing) actually affects the run time for XF. (PASSED THIS TASK TO JULIE. WILL DO TOMORROW 3/4)</li>
<li><u>Evolve</u>: Right now, we are using an interactive job with only a CPU, running at half scale, and have just edited our software so that it submits jobs for the XF simulations requesting 1 GPU (could be 2 GPUs if we choose to edit it).
<ol>
<li>WHY WE ARE DOING THIS:
<ol>
<li>Remember that we only need an interactive job when XF initializes and reads in the DNA for each individual. The GUI closes once the simulations start and we no longer need an interactive job. </li>
<li>We had been requesting a 1-GPU interactive job and just running XF from the command line; however, OSC has been SUPER busy lately and we haven't been getting those jobs, especially since we need long wall times if we run the full simulations over many generations within the interactive job. We realized that if we instead submit a CPU-only interactive job (so that XF can open its GUI), and then run the simulations as separate batch jobs requesting 1-2 GPUs, we can request a much smaller wall time and the simulations will run sooner: i.e., a wall time of about 10 minutes at 1-2 GPUs per simulation for each job submission, versus a 5+ hour wall time for an interactive job using 1-2 GPUs.</li>
<li><strong>WE HAVE TESTED THIS, AND WE HAVE ERRORS. WE WORKED ON FIXING THEM TODAY, AND HOPE TO FINALIZE THEM TOMORROW (3/4) AND RUN!</strong></li>
</ol>
</li>
</ol>
</li>
<li>
<p dir="ltr"><u>Parallelize the AraActualBicone job.</u></p>
<ol>
<li>
<p dir="ltr">The AraSim run in Gen 0 that gets Veff for the ARA actual bicone input file is not currently being parallelized. (ALEX M)</p>
</li>
<li>
<p dir="ltr">As of 3/3, I have not gotten an update on this; I will check on its status tomorrow.</p>
</li>
</ol>
</li>
<li>
<p dir="ltr"><u>Comment code thoroughly (Everyone needs to help with this)</u></p>
<ol>
<li>
<p dir="ltr">Julie made some progress on this over the weekend.</p>
<ol>
<li>
<p dir="ltr">Made comments on what each bash script version does, and cleaned old code out of our directories so that it doesn't confuse the others when they start helping.</p>
</li>
</ol>
</li>
<li>
<p dir="ltr">WAY more still needs to be done.</p>
</li>
</ol>
</li>
<li>
<p dir="ltr"><u>Get new proposal results for March 12th deadline.</u></p>
</li>
</ol>
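<p>The per-simulation batch jobs described under the Evolve item could look roughly like the job script below. This is a sketch only, assuming OSC's Slurm scheduler; the resource flags, job name, and the XF solver invocation are placeholders, and the real command and flags live in our submission scripts.</p>

```shell
#!/bin/bash
#SBATCH --time=00:10:00      # short wall time per simulation (vs 5+ hr interactive)
#SBATCH --gpus-per-node=1    # 1 GPU; bump to 2 if we edit the scripts for 2 GPUs
#SBATCH --job-name=xf_indiv

# Placeholder XF invocation -- the real solver command and its flags
# come from our submission scripts, not shown here.
echo "would run XF solver on individual ${1:-0} with 1 GPU"
```

<p>The point of the short wall time is scheduling: a 10-minute 1-GPU request should clear OSC's queue far faster than a 5+ hour interactive GPU job.</p>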