Resources for Simulation Parameters of YBCO #51
Since TDGL is only well defined near Tc, is it best practice to use the temperature-dependent parameters for the coherence length and penetration depth? I have noticed that most papers just use the values near 77 K even if Tc is roughly 90 K. Also, since 81.6 K resulted in \tau_{in} = 10 ps, I would expect that if we extrapolate to 90 K, gamma should tend towards 0. It is difficult to know whether to use values near Tc or just at the temperature of the real experiment.
I think you should use the temperature-dependent value of all parameters, evaluated at the temperature of the experiment. In some formulations of TDGL, you input the T=0 values of the material parameters and the ratio t = T/T_c; the functional form of the temperature dependence \xi(t), \lambda(t) is then included in the TDGL equations. In pyTDGL, however, you should specify the values of the parameters that you think the material has at the temperature of the experiment. There is the question of how close to T_c is close enough for GL theory to be valid. I don't really know the answer - people often apply GL theory even far below T_c for lack of a better model. However, I have seen papers claiming that the "GL region" is T >= 0.85 * T_c (for example, Fig. 16 of this paper: https://arxiv.org/abs/2205.15000). For optimally doped YBCO at LN2 temperature you have 77 K / 93 K = 0.83, so it is pretty close.
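For concreteness, here is a minimal sketch of the kind of rescaling I mean, assuming the standard near-T_c GL scaling xi, lambda ~ (1 - t)^(-1/2). The exact temperature dependence you adopt for YBCO is a modeling choice, and the T = 0 values below are placeholders, not recommendations:

```python
import numpy as np

def gl_scaled_params(xi0, lambda0, T, Tc):
    """Estimate xi(T) and lambda(T) from their T = 0 values using the
    near-Tc Ginzburg-Landau scaling, xi, lambda ~ (1 - T/Tc)**(-1/2)."""
    t = T / Tc
    if not 0 <= t < 1:
        raise ValueError("Requires 0 <= T < Tc.")
    scale = 1.0 / np.sqrt(1.0 - t)
    return xi0 * scale, lambda0 * scale

# Placeholder YBCO-like T = 0 values, in nm:
xi_T, lambda_T = gl_scaled_params(xi0=1.5, lambda0=150.0, T=77, Tc=93)
print(f"xi(77 K) ~ {xi_T:.1f} nm, lambda(77 K) ~ {lambda_T:.0f} nm")
```

These temperature-evaluated numbers are then what you would pass to pyTDGL as the material parameters.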
Is there any way to get around the increase in computation time when gamma >> 10? Simulation times went from roughly 12 hours with gamma = 1 to 80 hours with gamma = 100. Maybe using the GPU version of the sparse solver might be beneficial here? I will try it soon.
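For reference, I believe the solver is selected through SolverOptions, so I am assuming the GPU run would look something like this (the accepted solver names may differ between pyTDGL versions):

```python
import tdgl

# Assumes CuPy and a CUDA-capable GPU are available, and that `device`
# is your existing tdgl.Device.
options = tdgl.SolverOptions(
    solve_time=200,        # total simulation time, in units of tau_0
    sparse_solver="cupy",  # GPU sparse solver; the default "superlu" runs on CPU
)
solution = tdgl.solve(device, options, applied_vector_potential=0.1)
```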
Large gamma requires the solver to choose a very small time step, which is why the simulation time increases so much. I am not sure if there is a way around it - I will have to think about it. In the meantime, your rationale for using gamma = 10 makes sense to me. The very large gap of YBCO makes things difficult...
I'm not sure how it is done in the backend, but could you use an adaptive mesh whose density is proportional to the Laplacian of the order parameter at each mesh site? You would still keep a maximum global edge length to maintain a reasonably accurate simulation before any fluctuations of the order parameter appear. I am not sure how computationally expensive it would be to adjust the mesh at each time step, but perhaps the mesh generation could be done on the GPU as well? I have no idea how difficult this kind of implementation would be.
I have been trying to simulate the nucleation of vortices similar to what my colleagues observed with magneto-optical imaging (MOI), shown above. I realised that the reason the vortices do not nucleate the same way in the simulations is the repulsion from the edges. Since we cannot simulate a sample size comparable to the experiment, the edges need to be considered. When the thin film is macroscopic, the edge repulsion is significantly weaker than the repulsion between vortices, which results in nucleated vortices clustering near the edges. On a microscopic scale, the edge repulsion becomes comparable to the repulsion between vortices, so we are left with the vortices dispersing throughout the sample. (See the attached video: nobiasPureG10.mp4.) Is it possible to tune the strength of the repulsion from the boundary? This may also be a consequence of the pinning landscape.
Unfortunately, I suspect that your experimental samples may simply be way too thick to be accurately modeled by a 2D model. Screening, the vortex-vortex interaction, and the vortex-edge interaction will all be very different in samples that are thick relative to xi and lambda.
Changing the structure of the mesh at each time step would be extremely slow.
I was hoping that since its superconducting properties are strongly anisotropic and dominated by the CuO planes, TDGL would work well for a 200 nm thick film. I guess I can still observe the vortex dynamics throughout a pinning array without directly comparing the nucleation to MOI.
Maybe you could 'fake' the inelastic scattering effect? Potentially you could interpolate the order parameter in the wake of vortex movement, like you would blur an image, with the degree of blurring related to gamma. I'm not even sure if this is a realistic approach. The simulation with gamma = 10 reminded me of slime mold simulations that I did in the past.
What are the dimensions of the MOI samples?
5 x 5 mm in area, and 200 nm thick. The samples were at roughly 10 K to provide optimal resolution with MOI.
Screening will definitely be a very big effect in that case. Let's say london_lambda = 5 um, so the effective penetration depth is Lambda = london_lambda^2 / d = (5 um)^2 / (0.2 um) = 125 um << 5,000 um, meaning screening can't be neglected. Even including screening in a simulation of a much smaller geometry will not be sufficient, since the strength of screening is dictated by Lambda / (minimum sample dimension perpendicular to the applied field). It's clear from the top left MOI image that the magnetic field is essentially completely screened from the interior of the sample before vortices penetrate.

To get the same effective strength of screening for your simulated geometry of L = Lx = Ly = 1000 nm, you could try artificially making london_lambda much smaller, such that london_lambda^2 / (d * L) = 125 / 5,000, i.e. london_lambda = 70 nm, and running the simulation with screening included (I would recommend trying the GPU for this). This obviously means that the GL parameter kappa is much smaller than in real YBCO, but it should capture the effect of screening in a much more realistic way. The lower critical field will also be much larger with a shorter london_lambda.

By the way, here is a London simulation using SuperScreen of the sheet current density and magnetic field for B_{z, applied} = 0.1 mT: [figure: SuperScreen sheet current density and out-of-plane field]
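To make the rescaling arithmetic above explicit, here it is as a short sketch (all numbers as quoted in this thread):

```python
import math

# Match the effective screening strength Lambda / L between the real
# sample and the simulated geometry.
d = 0.2            # film thickness, um
L_sample = 5000.0  # lateral size of the real sample, um
L_sim = 1.0        # lateral size of the simulated geometry, um

lambda_real = 5.0                 # assumed london_lambda, um
Lambda_real = lambda_real**2 / d  # effective penetration depth: 125 um
ratio = Lambda_real / L_sample    # dimensionless screening strength: 0.025

# Choose the simulated london_lambda so that Lambda_sim / L_sim matches:
lambda_sim = math.sqrt(ratio * d * L_sim)  # ~0.07 um, i.e. ~70 nm
print(f"scaled london_lambda ~ {1e3 * lambda_sim:.0f} nm")
```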
I have been meaning to try SuperScreen as well! It looks really good! Thank you for the detailed explanation. Also, any reason you chose 5 um for london_lambda instead of 0.150 um? Was it to get a quick output from SuperScreen? And I just noticed the boundary around the 5 x 5 mm sample - what is that? I took a look at the documentation and noticed that the mesh is generated such that it envelops everything, even holes, with some padding. Is this because it also calculates the inductance over vacuum? Edit: I just noticed the non-zero field around the film in the second figure, so yes, this must be the case :)
You should apply the reasoning I described using whatever value of london_lambda you think the samples have. Here are some references: https://hoffman.physics.harvard.edu/materials/ybco/. If london_lambda in the ab plane is really 150-200 nm, then the method I described may not work: the scaled london_lambda for your simulated geometry would only be about 3 nm. Unfortunately, the real sample geometry makes it very difficult to apply 2D TDGL in any realistic way. In SuperScreen, the solve time is independent of Lambda, and xi is completely irrelevant. The simulation method is inherently self-consistent, so, unlike in TDGL, there is no need to do an iterative calculation of the induced magnetic field. This makes SuperScreen fast regardless of the strength of screening.
In SuperScreen, the vacuum inside of holes always has to be meshed. Meshing the vacuum surrounding the film is optional (see below, where I set …).
I started using SuperScreen but noticed that the mesh generation takes an extremely long time for that 5000 x 5000 um sample that you simulated. I also cannot get an output similar to yours for the same parameters; this may be a result of my mesh size. Your computer seems to behave really well in the meshing stage, reaching 20,000 it/s, whereas mine can only handle a maximum of 2,000 it/s. I will try uninstalling SciPy and installing it with conda, as you suggested before. Sorry for posting SuperScreen-related issues here, but I believe it is best when considering the context of the discussion.
Hmm, that doesn't make much sense to me. Try running this notebook on Google Colab and locally, and let me know if you still see a discrepancy: https://gist.github.com/loganbvh/cc5453195153f7b5831ea95174b4091f
Okay, it is now working, thank you! Not sure what happened; it could be because I did not plot the last entry in solutions with …
My PC just crashed halfway through a 2-day simulation :( Do you think I can just load the h5 file and resume it? I will test how long mesh generation takes in a bit.
The mesh took roughly 15 seconds to generate for me. In your first SuperScreen post above, the rounded corners of the vacuum region looked like higher mesh density, so I assumed you had used a smaller …
You may be able to load the last saved state from the existing H5 file and use it to seed a new simulation:

```python
import h5py
import tdgl
from tdgl.solution.data import TDGLData, get_data_range

# Find the index of the last solve step saved in the interrupted run,
# then load the raw TDGL data for that step.
with h5py.File(<h5-path>, "r") as h5file:
    first, last = get_data_range(h5file)
    tdgl_data = TDGLData.from_hdf5(h5file, last)

# Minimal stand-in for a tdgl.Solution: the solver only needs the
# seed_solution's tdgl_data attribute to initialize the new run.
class DummySolution:
    pass

seed_solution = DummySolution()
seed_solution.tdgl_data = tdgl_data

solution = tdgl.solve(device, ..., seed_solution=seed_solution)
```
Would that then start to append the rest of the simulation to the same file?
No, you would have to specify a new output file for the resumed run.
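For example, a sketch assuming the output location is set through SolverOptions, as in recent pyTDGL versions:

```python
import tdgl

# Write the resumed simulation to a fresh file rather than trying to
# append to the interrupted run's output.
options = tdgl.SolverOptions(
    solve_time=100,            # remaining simulation time, in units of tau_0
    output_file="resumed.h5",  # new output file for the resumed run
)
solution = tdgl.solve(device, options, seed_solution=seed_solution)
```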
SuperScreen actually doesn't need to construct Voronoi cells at all. The numerical method that SuperScreen implements will work with practically any mesh - it is much more forgiving than pyTDGL in that way. I could add a progress bar for mesh smoothing, but to be honest, smoothing is completely optional and doesn't affect the simulation results very much in SuperScreen.
The file cannot be loaded due to: …
I can add a method to calculate the total magnetization. However, the London model is completely linear - the Meissner supercurrent density is directly proportional to the magnetic field. This means that the London model doesn't know about vortex nucleation, and there will never be any magnetic hysteresis.
…On Sat, Oct 14, 2023 at 1:26 AM Kyroba wrote: "In SuperScreen, to get the magnetization of the film, can we do the same method using the sheet current? [image: https://user-images.githubusercontent.com/34236089/275172996-cc6ab3e4-5ebc-482a-a3d5-c423f898345e.png] The stream function provides the local magnetization, but can we have an output of the total as well, to create a magnetic hysteresis plot (Bz vs Mz)?"
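In the meantime, since the stream function g of a sheet current acts as a local magnetization density, the total out-of-plane magnetic moment is just its area integral, m_z = ∫ g dA. A minimal numpy sketch, assuming you have already extracted the stream function at the mesh vertices and the corresponding vertex areas from a SuperScreen solution (the array names here are hypothetical, not SuperScreen API):

```python
import numpy as np

def total_moment(g: np.ndarray, vertex_areas: np.ndarray) -> float:
    """Total out-of-plane magnetic moment, m_z = sum_i g_i * A_i.

    g: stream function sampled at each mesh vertex (sheet-current units, e.g. mA)
    vertex_areas: effective area associated with each vertex (e.g. um^2)
    """
    return float(np.sum(g * vertex_areas))

# Sweeping the applied field and recording m_z at each step would give an
# M(B) curve - but as noted above, a purely linear London model cannot
# produce hysteresis.
```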
I am not too familiar with these models, but I feel intuitively that you could obtain a curve by keeping the scale of the sheet current the same and adjusting the field (as in the attached image), then taking the average pixel intensity over the entire film to determine a magnetization value at each field. What do you think? Technically, since we have the K data, there is no need to use pixel intensity. I have not read too much on simulation techniques for superconductors, but with the MOI data you could effectively determine the magnetization via this method, so I thought the same could be applied here.
I was wondering if anyone had any details on the simulation parameters for YBCO. I understand almost all of them must be derived experimentally, but I was just looking for a few estimates. The coherence length and penetration depth are easily available, but gamma is a lot more difficult to find, if it can be found at all. I have found a few papers where they provide their value of gamma, but it is the ratio of anisotropies and not the same gamma as in pyTDGL.
I found a paper which had a phonon scattering time t_in of approximately 20 ps and a superconducting gap del_0 of 20 meV, which gives a gamma of approximately 1210. I found another resource with a GL relaxation time t_0 of 0.03 ps, which I believe relates to the scattering time; this yields a gamma of roughly 0.29. Although, I am not sure how reliable this information is, as I cannot access the paper it is based on.
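For what it's worth, plugging those numbers into the generalized-TDGL definition gamma = 2 * tau_in * del_0 / hbar (my assumption for the definition) does reproduce the first value:

```python
from scipy.constants import e, hbar

tau_in = 20e-12      # phonon (inelastic) scattering time: 20 ps
delta_0 = 20e-3 * e  # superconducting gap: 20 meV, converted to joules

gamma = 2 * tau_in * delta_0 / hbar
print(f"gamma ~ {gamma:.0f}")  # ~1215, consistent with the ~1210 quoted above
```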
So, I am left qualitatively watching the dynamics of the simulation to see which value most closely resembles how vortices behave as I apply a current in a YBCO sample. I don't have the equipment at the moment to experimentally verify it myself.