Liuxuan's Homepage (Last Update: 30 June 2004)
Progress log | Code structure | Current problems | Past problems | Plots | Notes | Presentations and link to MHD-related papers
For the most current and detailed updates, please see the section 'Current problems'.
Progress log

August 2005
August 18: energy plots for a set of runs
1. energy_Z4xxxxxx.eps (valid before the kinks): Z4012800 (green), Z4012805 (lower black), Z4012810 (higher black), Z4006405 (blue)
August 15: velocity power spectra, parallel and perpendicular
1. Test_psV.eps - parallel (solid), perpendicular (short dash), v_x (long dash); ndv=32, kpk=4, n=32: July 2004 subsonic hydrodynamic turbulence (just before crashing)
2. Z40008xx.eps - parallel (solid), perpendicular (short dash), v_x (long dash); ndv=32, kpk=4, n=32: Z4000800 | Z4000805 | Z4000810
3. Z4xxxx05.eps - parallel (solid), perpendicular (short dash), v_x (long dash); ndv=32, kpk=4, lambda=[8,16]: Z4000805 | Z4001605

June 2005
June 15: energy evolution plots (run names: 1st digit = L/lambda; 2nd-5th digits = lambda/dx; last 2 digits ~ v_A/c_s; both E_B and \Delta E_B are shown with the same ltype) - see the decoding sketch after this log
1. SOG series - S8006400 (Mach=3.42) | S8006401 (Mach=3.07) | S8006404 (Mach=3.03) | S8006414 (Mach=3.03) energy evolution
2. ZMP series - Z4012800 (Mach=1.27) | Z4012804 (Mach=1.33) | Z4012832 (Mach=1.82) energy evolution
3. ZMP series - Z4012805 (Mach='5') | Z4012807 (Mach='7') | Z4012810 (Mach='10') energy evolution

January/February 2004
Jan 5: roughly done 'Riemann shock tube' test - serial + MPI
Jan 5: roughly done 'Sound wave' test - serial
Jan 10: coded 'Sound wave' test - MPI
Feb 20: coded 'perturb' - MPI

May 2004
May: made use of subroutines energy, momentum and output (can be used as checkpoint)
May: implemented coherence time in perturb
May: debugged perturb

June 2004
June: large run
June: preliminary analysis of large-run outputs, such as fluid power spectrum (PS), dv_PS, energy, input_energy
June: preliminary, serial b-boundary condition (b-BC) implementation

July 2004
July: added readfile3d subroutine to restore checkpoints and to look at turbulence decay

Future planning:
1. fine-tune large runs: set parameters
2. comprehensive measurement of power spectrum and energy evolution for large runs
3. debug and parallelize b-BC solver
4. draft turbulence paper
5. develop u-BC solver

At some point:
1. 'sound wave' test - MPI
2. 'alfven wave' test - MPI
3. magnetic helicity conservation

Caption for section 'Progress log': purple text - completed; black - currently in progress; blue - future planning
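The run-name convention in the June 15 entry can be unpacked mechanically. Below is a small, hypothetical helper (not part of the mhd code) that splits a name such as Z4012805 into the three fields described above; the assumption that the first character is only a series tag (Z = ZMP, S = SOG) comes from the run lists in this log.

```fortran
! Hypothetical helper: decode a run name such as 'Z4012805' using the
! convention stated in the June 15 entry (assumed, not taken from the code).
program decode_run
  implicit none
  character(len=8) :: name
  integer :: l_over_lambda, lambda_over_dx, va_over_cs

  name = 'Z4012805'
  ! character 1 is assumed to be the series tag (Z = ZMP, S = SOG)
  read (name(2:2), '(I1)') l_over_lambda      ! 1st digit      = L/lambda
  read (name(3:6), '(I4)') lambda_over_dx     ! 2nd-5th digits = lambda/dx
  read (name(7:8), '(I2)') va_over_cs         ! last 2 digits  ~ v_A/c_s

  print '(A, A8)', 'run        : ', name
  print '(A, I4)', 'L/lambda   : ', l_over_lambda
  print '(A, I4)', 'lambda/dx  : ', lambda_over_dx
  print '(A, I4)', '~v_A/c_s   : ', va_over_cs
end program decode_run
```

For Z4012805 this prints 4, 128 and 5, consistent with the Mach='5' label used in the June 15 list.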
Code structure

Main program: mhd

Basic advection subroutines (now isothermal):
29. MPI initialization
1. advectbyzx(u,b,nx,ny,nz,dt)
2. calcfl(u,b,nx,ny,nz,cfl)
3. eos(d,y,e,p,cs,status) - not used
4. fluidx.f90(u,b,phi,nx,ny,nz,dt)
5. mhdflux(frp,flm,u,b)
6. mhdupdate(u,u1,flux1,flux0,phi2,phi0,dt)
7. sweep(forward,u,b,phi,nx,ny,nz,dt)
8. transpose.f90: transposef/transposeb(u,b,phi,nx,ny,nz)
9. tvd1(u,b,phi,n,dt)
10. tvdb(flux,b,vg,n,dt)

Frequently modified basic subroutines:
11. init(u,b,nx,ny,nz)
12. output(num,time,u,b,nx,ny,nz) [I/O]

Advection flowchart: mhd 29 11 2 7 8 12 4 1 9 10 5 6

Wave testing subroutines - added:
13. analytic(num,time,nx,ny,nz,ua)
14. error(erho,evx,nx,ny,nz,u,ua,ncr)

General purpose subroutines - added (not shown in flowchart):
15. energy(u,b,nx,ny,nz,ek,eb,eth) - a hedged sketch of this diagnostic follows the compile notes below
16. momentum(rho,v,nx,ny,nz,p)
17. checkpoint - done in output
31. outputenergy(iter,time,ek,eb,eth)
34. output3d(u,b,nx,ny,nz) [I/O]
28. readfile3d(u,b,nx,ny,nz) [I/O]

Turbulence driving subroutines:
- Turbulence testing
18. check.f90: checkdiv(dv), checkpowerspectrum(dv,ps0,l) [I/O]; pps_3d.f90(dv(c,:,:,:),ps3d(c,:,:,:),ndv); c3d_to_c1d.f90(in3d,l,out1d)
33. outputdv(time,dv)
- Turbulence generating
19. perturb(countdv,iter,time,nx,ny,nz,u,dele,delt,dv) [uses ALLOCATE]
20. gaussian(countdv,dv)
21. convolve(l,ps0,dv)
22. curl(dv) - OK
32. mixdv(dv,dvnew,delt)
23. minus_momentum(u(1),dv,nx,ny,nz) - OK
24. normalize_energy(u,dv,nx,ny,nz,de) - OK
25. addv(u,dv,nx,ny,nz)
26. c1d_to_c3d(ps3d(out),l,ps1d(in))
27. pconv_filter(ps3d,dv(c,:,:,:),ndv)
30. fft_3d(ca,fa,n,direction)

Turbulence driving subroutines flowchart: 19 20 21 22 18 32 23 24 25 26 27 30

Subroutines that are directly made into object files:
1. comm_module.f
2. parameter_module.f
3. prof_module.f [I/O]
4. spherical_module.f
5. trapifc.c
6. poisson.f
7. ipp.c
8. ippfft.f90

Caption: red text - to be completed; purple - to be made use of; orange - being debugged; blue - rarely touched but has been changed. Flowchart (numbers): blue - important/main; purple - temporary testing tools; orange - indexed out of order.

Changes:
1. disabled profiling within sweep, e.g. work on central parts
2. enter and exit are verbose for --perturb --gaussian --convolve --check-div --check-powerspectrum --mixdv --minus_momentum --normalize_energy --addv

To plot:
1. move from BOB to local
2. sm on local

To backup:
1. tar and scp Output -- local to BOB; remove Output.tgz on both sides
2. tar directory SOG to Bob.tgz on McKenzie
3. scp Bob.tgz -- BOB to local
4. unzip Bob.tgz to CurrentVersion on local
5. date the tar file Bob_month_year.tgz and put it in OldVersion on local
To compile, (debug,) and execute:
0. use the intel-8.0 compiler
1. Run with qsub (on bob): make clean; make; (qsub mhd.batch) / (make run)
2. Run with the debugger (on devel1-8): lamboot -b hosts.dev; make clean; make; gdb ./mhd; r
3. Run without the debugger (on devel1-8): lamboot -b hosts.dev; make clean; make; (time) mpirun C ./mhd
4. Debug with MPI: mpirun h -c [number of processes] xterm -sb -sl 1000 -e gdb/idb mhd

Compiler options (in order of importance to smooth performance):
0. library: -I/opt/lam-6.6b1/include/ -L/opt/lam-6.6b1/lib/; floating point handling: -xW
1. optimization: -fpp2 -tpp7 -O3 (turn off -O3 when debugging)
2. debugging: -C -DDEBUG -g
3. floating point handling: -fpe1

Number of time steps required on a single cpu:
1. n=16: about 40
2. n=32: about 60
3. n=64: greater than 206 (still stable)
4. n=128: (stable)
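As referenced at item 15 in the subroutine listing, here is a minimal, self-contained sketch of the kind of volume sum an energy diagnostic performs. The array layout assumed below (u(1,:,:,:) = density, u(2:4,:,:,:) = momentum density, b(1:3,:,:,:) = magnetic field) is an illustrative assumption, not read off the actual code, and the thermal term eth is omitted because its form depends on the isothermal EOS convention.

```fortran
! Minimal sketch of a kinetic/magnetic energy sum; the u and b layouts are
! assumptions for illustration only, and the data are fabricated.
program energy_sketch
  implicit none
  integer, parameter :: nx = 8, ny = 8, nz = 8
  real :: u(4, nx, ny, nz), b(3, nx, ny, nz)
  real :: ek, eb
  integer :: i, j, k

  ! fabricated test data: uniform density, small uniform velocity and field
  u(1,:,:,:) = 1.0
  u(2,:,:,:) = 0.1          ! rho*vx
  u(3,:,:,:) = 0.0
  u(4,:,:,:) = 0.0
  b = 0.0
  b(1,:,:,:) = 0.2          ! Bx

  ek = 0.0
  eb = 0.0
  do k = 1, nz
     do j = 1, ny
        do i = 1, nx
           ! kinetic energy density: 0.5 * |rho v|^2 / rho
           ek = ek + 0.5 * sum(u(2:4,i,j,k)**2) / u(1,i,j,k)
           ! magnetic energy density: 0.5 * |B|^2
           eb = eb + 0.5 * sum(b(:,i,j,k)**2)
        end do
     end do
  end do

  print *, 'total kinetic  energy =', ek
  print *, 'total magnetic energy =', eb
end program energy_sketch
```

In the parallel code, such local sums would still have to be combined across processes (e.g. with MPI_ALLREDUCE); a standalone check of that call is given after Table 1 in 'Current problems'.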
Current problems

Waiting for confirmation:
1. for ndv=nxg (see plot 9): what is a tolerable difference between the prescribed and measured power spectrum? and between lines of different colours?
2. for ndv.ne.nxg (see plot 10): what is a tolerable difference between the prescribed and measured power spectrum? and between lines of different colours?
3. ndv.ne.nxg using the intel-7.0 compiler (see plot 11)
4. What is an acceptable coarse-grid resolution? (see Table 1)
5. What is a good ratio of ndv to kpk? (How narrow should the driving be?)
6. How should the injected energy scale with box size?
7. first use a fixed coherence time - what is a reasonable one to use? - L/v

Questions for myself:
1. What are Q and D (like E) for floating point numbers?
2. check that MPI_ALLREDUCE indeed takes single-precision real arguments (e.g. from the manuals) - see the sketch after Table 1
3. why is the first perturbation so small before minus_momentum (original total dv momentum) -- random seed?

To be resolved [June 27 - July 3]:
1. put in checkpointing (i.e. output every 500 iterations) - for the time limit and for security
2. set parameters for the turbulence simulation
   - set t_coh = eddy turn-over time = L_pk/v_rms -- v_rms from avg(ek)
   - tune rdvnew (normalize energy) -- check its effect -- avg(de) should equal dele
3. measure the power spectrum
   - compile with 'f90 -T 1000000000 -D 1020000000'
   - use common blocks with the 'common' command
   - declare all large arrays as global variables
   - to be analyzed:
     -- ncpu=1024, nxg=32, iterf=100, 500
     -- ncpu=256, nxg=32, iterf=100
     -- ncpu=128, nxg=32, iterf=100
4. plot energy - observe the time of saturation; plot input_energy - note the fluctuation
5. b-boundary condition
6. increase speed / memory efficiency - no need to generate the 3d power spectrum for dv every time
7. draft turbulence paper
8. develop u-BC solver

N-7. complete the following comparison table (Table 1). From now on, use a power spectrum of the form ps2v = k**6 * exp(-(k/kpk)**2) until specified otherwise.
a. for (ndv=16, nxg=32) try running with more CPUs (up to 16)
b. for (ndv=16, nxg=32) compare time, power spectrum and energy evolution for different ncpu
Table 1. Wall-clock time, power spectrum (PS) and energy for different ncpu:

run parameters | ncpu=1 | ncpu=2 |
---|---|---|
nxg=32, (ndv=16, kpk=0.25) | 25m44.445s, PS, energy | 17m17.709s, PS, energy |
nxg=32, (ndv=32, kpk=0.5) | ?m?s, PS, energy | |
nxg=32, (ndv=64, kpk=1) | ?m?s, PS, energy | |
nxg=32, (ndv=128, kpk=2) | ?m?s, PS, energy | |
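Regarding question 2 under 'Questions for myself': a quick way to confirm that MPI_ALLREDUCE handles default (single-precision) reals is to reduce one with the MPI_REAL datatype and compare against the known answer. The program below is a standalone check, not part of mhd; it would be built and launched along the lines of the compile notes in 'Code structure' (e.g. with LAM's mpirun).

```fortran
! Standalone check: reduce a default (single precision) real with MPI_REAL
! and compare the result against the number of processes.
program allreduce_check
  implicit none
  include 'mpif.h'
  integer :: ierr, rank, ncpu
  real    :: mine, total          ! default real, i.e. single precision

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, ncpu, ierr)

  mine = 1.0
  ! MPI_REAL is the MPI datatype matching Fortran default real
  call MPI_ALLREDUCE(mine, total, 1, MPI_REAL, MPI_SUM, MPI_COMM_WORLD, ierr)

  if (rank == 0) print *, 'sum =', total, ' expected =', real(ncpu)
  call MPI_FINALIZE(ierr)
end program allreduce_check
```

If the printed sum equals the process count on every run, MPI_REAL with default reals is behaving as expected.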
Plots

Turbulence paper: draft paper

Movies:
1. driving scheme B, t_coh/t_eddy=1, beta=1, 256^3, timef/ts=0.95 (initial B-field in the x-direction, background = density): mid xy-plane | mid yz-plane

Energy and spectra for N=512 and 1024, Ndv=32:
1. N=512, iterf=100, tcoh=0.01: energy evolution | input energy | rho power spectrum
2a. N=1024, iterf=100, tcoh=0.01: rho power spectrum
2b. N=1024, iterf=500, tcoh=0.01: energy evolution | input energy | rho power spectrum
2c. N=1024, iterf=500, tcoh=210: energy evolution | input energy | rho power spectrum
2d. N=1024, iterf=500, tcoh=10: energy evolution | input energy | rho power spectrum
II(303). N=512, iterf=250, tcoh=7E-3, beta=1, SOG: energy evolution | input energy | rho power spectrum | p power spectrum

MPI (using smooth minmod):
- Wave tests
1. Riemann shock tube (N=16, 0.7*cfl) - rho | vx
2. Sound waves (N=16, 0.7*cfl) - rho | vx
3. Sound waves (N=16, 0.6*cfl) - rho | vx
- TYPE 2 forcing: proportional to exp(-(k/kpk)^2)
4. 1D power spectrum (MPI) - N=ndv=16 -- using the intel-8.0 compiler
5. 1D power spectrum (MPI) - N=128, mod in curl.f90, ps2v=k**6 * exp(-(k/kpk)**2)
6. 1D power spectrum (MPI) - N=128, modulo in curl.f90, ps2v=k**6 * exp(-(k/kpk)**2)
7. 1D power spectrum (MPI) - N=128, mod in curl.f90, ps2v=k**6 * exp(-4.03332*(k/kpk)**2)
8. 1D power spectrum (MPI) - N=128, modulo in curl.f90, ps2v=k**6 * exp(-4.03332*(k/kpk)**2)
9. 1D power spectrum (MPI) - N=128, component explicit in curl.f90, ps2v=k**6 * exp(-4.03332*(k/kpk)**2)
10. 1D power spectrum (MPI) - N=64, ndv=128, component explicit in curl.f90, ps2v=k**6 * exp(-4.03332*(k/kpk)**2)
11. 1D power spectrum (MPI) - intel-7.0 compiler, N=32, ndv=128, component explicit in curl.f90, ps2v=k**6 * exp(-4.03332*(k/kpk)**2)

Serial (using Van Leer):
12. 1D power spectrum (serial) - N=128 [May 30 - June 5]
- TYPE 2 forcing: ps2v = k**6 * exp(-(k/kpk)^2)
13. 1D power spectrum (serial) - kpk=0.25, ndv=128
14. 1D power spectrum (serial) - kpk=4.0, ndv=128
15. time correlation of dv (MPI) - tcoh=5, kpk=4
16. time correlation of dv (MPI) - tcoh=10, kpk=4
17. time correlation of dv (MPI) - tcoh=100, kpk=4
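The plots above use two forms of the driving spectrum, ps2v = k**6 * exp(-(k/kpk)**2) and the variant with the extra factor 4.03332 in the exponential. The short standalone program below simply tabulates both and reports where each peaks relative to kpk; analytically the first form peaks at k = sqrt(3)*kpk (about 1.73 kpk). The value kpk = 4.0 is an arbitrary illustrative choice, and nothing here is taken from the driving code itself.

```fortran
! Tabulate the two prescribed driving spectra and locate their maxima.
program ps2v_peak
  implicit none
  real, parameter :: kpk = 4.0
  integer, parameter :: nk = 10000
  real :: k, p1, p2, k1max, k2max, p1max, p2max
  integer :: i

  p1max = -1.0;  p2max = -1.0;  k1max = 0.0;  k2max = 0.0
  do i = 1, nk
     k  = 10.0 * kpk * real(i) / real(nk)
     p1 = k**6 * exp(-(k/kpk)**2)
     p2 = k**6 * exp(-4.03332*(k/kpk)**2)
     if (p1 > p1max) then
        p1max = p1;  k1max = k
     end if
     if (p2 > p2max) then
        p2max = p2;  k2max = k
     end if
  end do

  print *, 'k**6*exp(-(k/kpk)**2)         peaks at k/kpk =', k1max/kpk
  print *, 'k**6*exp(-4.03332*(k/kpk)**2) peaks at k/kpk =', k2max/kpk
end program ps2v_peak
```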
Notes

1. Domain decomposition: in the order z-y-x, with 3 buffer zones on each side
2. Isothermal (MPI): cfl <= 0.7
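As a concrete illustration of note 1, the sketch below works out the slab bookkeeping for a split along z only, padding each local block with 3 buffer zones per side. The even division of nzg among ncpu and the example values nzg = 32, ncpu = 4 are assumptions for illustration; the real decomposition proceeds in the order z-y-x.

```fortran
! Sketch of 1-D (z only) slab decomposition with 3 buffer zones per side.
program decomp_sketch
  implicit none
  integer, parameter :: nbuf = 3          ! buffer zones per side
  integer, parameter :: nzg  = 32         ! global grid size in z (example)
  integer, parameter :: ncpu = 4          ! number of processes (example)
  integer :: rank, nz_local, z0, z1

  nz_local = nzg / ncpu                   ! assumes ncpu divides nzg evenly
  do rank = 0, ncpu - 1
     z0 = rank * nz_local + 1             ! first interior z index (global)
     z1 = z0 + nz_local - 1               ! last interior z index (global)
     print '(A,I2,A,I3,A,I3,A,I3)', 'rank ', rank, &
          ': interior z = ', z0, ' ..', z1, &
          ', allocated nz = ', nz_local + 2*nbuf
  end do
end program decomp_sketch
```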
Presentations

Bologna_2005_poster.pdf
KAW2_2004_MHD_poster.ppt
CUPC_2003_talk.ppt
© 2002 by Lucy Liuxuan Zhang