Resources for Simulation Parameters of YBCO #51

Closed · Kyroba opened this issue Oct 11, 2023 · 31 comments

@Kyroba commented Oct 11, 2023

I was wondering if anyone had any details on the simulation parameters for YBCO. I understand almost all of them must be derived experimentally, but I was just looking for a few estimates. The coherence length and penetration depth are easily available, but gamma is much more difficult to find, if it can be found at all. I have found a few papers that provide a value of gamma, but it is the anisotropy ratio, not the same gamma as in pyTDGL.

I found a paper reporting a phonon scattering time \tau_{in} of approximately 20 ps and a superconducting gap \Delta_0 of 20 meV, which gives a gamma of approximately 1210. I found another resource with a GL relaxation time \tau_0 of 0.03 ps, which I believe relates to the scattering time. However, I am not sure how reliable this information is, as I cannot access the paper it cites.

[image: slide quoting a GL relaxation time \tau_0 ≈ 0.03 ps for YBCO]

This yields a gamma of roughly 0.29.

So, I am left qualitatively watching the dynamics of the simulation to see which value most closely resembles how vortices behave as I apply a current to a YBCO sample. I don't have the equipment at the moment to verify it experimentally myself.

@loganbvh (Owner)

I think \tau_0 in that slide is the same as \tau_0 in pyTDGL, i.e., \tau_0 = \mu_0\sigma\lambda^2. The values for \xi_0 and \lambda_0 in that slide look reasonable. I have no idea about the value of \gamma in YBCO.
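For what it's worth, plugging in typical literature numbers reproduces that order of magnitude (a sketch; the normal-state resistivity and penetration depth here are assumed values, not taken from the slide):

from scipy.constants import mu_0

# Assumed values: normal-state resistivity rho ~ 100 uOhm*cm near Tc,
# lambda_0 ~ 150 nm.
sigma = 1 / (100e-6 * 1e-2)  # conductivity in S/m, from rho = 100 uOhm*cm
lam = 150e-9                 # penetration depth in m

tau_0 = mu_0 * sigma * lam**2
print(tau_0 * 1e12, "ps")    # ~0.03 ps, consistent with the slide's value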

Here is one estimate of the inelastic scattering time for YBCO (inset of the right plot): \tau_{in} = 10-100 ps over some range of temperatures. It seems like everyone agrees that the inelastic scattering time is strongly temperature dependent in the cuprates.

[image: plots from the paper below; the inset of the right plot shows \tau_{in} vs. temperature]

PDF of this paper:

1-s2.0-0921453495004114-main.pdf

@Kyroba (Author) commented Oct 12, 2023

Since TDGL is only well defined near Tc, is it best practice to use the temperature-dependent parameters for the coherence length and penetration depth? I have noticed that most papers just use the values near 77 K even if Tc is roughly 90 K.

Oh, and since 81.6 K gave \tau_{in} = 10 ps, I would expect that if we extrapolate to 90 K, gamma should tend towards 0. It is difficult to know whether to use values near Tc or at the actual temperature of the experiment.

@loganbvh (Owner)

I think you should use the temperature-dependent value of all parameters, evaluated at the temperature of the experiment.

In some formulations of TDGL, you input the T = 0 values of the material parameters and the ratio t = T/T_c. The functional form of the temperature dependence \xi(t), \lambda(t) is then included in the TDGL equations. However, in pyTDGL you should specify the values of the parameters that you think the material has at the temperature of the experiment.

There is the question of how close to T_c is close enough for GL theory to be valid. I don't really know the answer - people often apply GL theory even far below T_c for lack of a better model. However, I have seen papers claiming that the "GL region" is T >= 0.85 * T_c (for example, Fig. 16 of this paper: https://arxiv.org/abs/2205.15000). For optimally doped YBCO at LN2 temperature you have 77 K / 93 K = 0.83, so it is pretty close.
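For concreteness, here is a minimal sketch of one common convention for those temperature dependences. The (1 - t)^{-1/2} scalings are the standard near-T_c GL forms, not something pyTDGL applies for you, and the T = 0 values are illustrative assumptions:

import math

# Standard near-Tc GL scalings (an assumed convention, not built into pyTDGL):
#   xi(T) = xi(0) / sqrt(1 - t),  lambda(T) = lambda(0) / sqrt(1 - t),  t = T / Tc
T, Tc = 77.0, 93.0      # kelvin: optimally doped YBCO at LN2 temperature
t = T / Tc              # ~0.83, just below the claimed GL region t >= 0.85
xi0, lam0 = 1.5, 150.0  # nm: illustrative T = 0 values (assumptions)

xi_T = xi0 / math.sqrt(1 - t)
lam_T = lam0 / math.sqrt(1 - t)
print(f"t = {t:.2f}: xi(T) ~ {xi_T:.1f} nm, lambda(T) ~ {lam_T:.0f} nm")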

@Kyroba (Author) commented Oct 13, 2023

Is there any way to get around the increase in computation time when gamma >> 10? Simulation times went from roughly 12 hours with gamma = 1 to 80 hours with gamma = 100. Maybe using the GPU version of the sparse solver would be beneficial here? I will try it soon.

@Kyroba (Author) commented Oct 13, 2023

I also found a paper that has the temperature dependence of the YBCO superconducting energy gap, if anyone is interested:
Wang, Ji & Li, Hao & Cho, Ethan & LeFebvre, Jay & Pratt, Kevin & Cybart, Shane. (2020). Portable Solid Nitrogen Cooling System for High Transition Temperature Superconductive Electronics. IEEE Transactions on Applied Superconductivity. PP. 1-1. 10.1109/TASC.2020.2986324.
[image: temperature dependence of the YBCO superconducting energy gap, from the paper above]

So \Delta ≈ 2.5 meV at 85 K, and extrapolating the scattering time in the paper you sent gives \tau ≈ 5 ps. This results in gamma = 2 * \tau * \Delta / \hbar ≈ 38, so an order-of-magnitude approximation of gamma = 10 for YBCO should suffice and will improve simulation time.
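As a quick sanity check of both estimates in this thread (a minimal sketch; tdgl_gamma is just a local helper, not part of pyTDGL):

from scipy.constants import e, hbar

def tdgl_gamma(tau_in_ps, delta_meV):
    """gamma = 2 * tau_in * Delta / hbar (dimensionless)."""
    tau = tau_in_ps * 1e-12       # inelastic scattering time in seconds
    delta = delta_meV * 1e-3 * e  # gap in joules
    return 2 * tau * delta / hbar

print(tdgl_gamma(20, 20))   # ~1215: the first paper's numbers
print(tdgl_gamma(5, 2.5))   # ~38: the extrapolated values above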

@loganbvh (Owner)

Large gamma requires the solver to choose a very small time step, which is why the simulation time increases so much. I am not sure if there is a way around it - I will have to think about it. In the meantime, your rationale for using gamma = 10 makes sense to me. The very large gap of YBCO makes things difficult... Using tdgl.SolverOptions.gpu = True should help a bit, although I don't think it will help more for large gamma than for small gamma.
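For reference, enabling the GPU looks something like this (a minimal sketch; the geometry and parameter values are illustrative assumptions, not YBCO-accurate numbers):

import tdgl
from tdgl.geometry import box

# Illustrative device (assumed values, in nanometers)
layer = tdgl.Layer(
    coherence_length=10,  # xi
    london_lambda=80,     # lambda
    thickness=5,          # d
    gamma=10,             # the order-of-magnitude estimate discussed above
)
film = tdgl.Polygon("film", points=box(1000, 1000)).resample(400)
device = tdgl.Device("sample", layer=layer, film=film, length_units="nm")
device.make_mesh(max_edge_length=15)

options = tdgl.SolverOptions(
    solve_time=100,  # in units of tau_0
    gpu=True,        # requires a CUDA-capable GPU and the cupy package
)
solution = tdgl.solve(device, options, applied_vector_potential=0.05)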

@Kyroba (Author) commented Oct 13, 2023

I'm not sure how it is done in the backend, but could you use an adaptive mesh, with density proportional to the Laplacian at the mesh sites? You would still have a global maximum edge length to keep the simulation reasonably accurate before any fluctuations are introduced to the order parameter. I am not sure how computationally expensive it would be to adjust the mesh at each time step, but perhaps the mesh generation could be done on the GPU as well? I have no idea how difficult this kind of implementation would be.

@Kyroba (Author) commented Oct 13, 2023

[image: magneto-optical imaging (MOI) of vortex penetration in a YBCO film]

I have been trying to simulate the nucleation of vortices similar to how it appears in the MOI from my colleagues above. I realised that the reason the vortices do not nucleate the same way in the simulations is the repulsion from the edges. Since we cannot simulate a sample size comparable to the experiment, the edges need to be considered. When the thin film is macroscopic, the edge repulsion is significantly weaker than the repulsion between vortices, which results in nucleated vortices clustering near the edges. On a microscopic scale, the edge repulsion becomes comparable to the repulsion between vortices, so the vortices disperse throughout the sample.

[video: nobiasPureG10.mp4]

Is it possible to tune the strength of the repulsion from the boundary? This may also be a consequence of the pinning landscape.

@loganbvh (Owner)

Unfortunately, I suspect that your experimental samples may simply be far too thick to be accurately described by a 2D model. Screening, the vortex-vortex interaction, and the vortex-edge interaction will all be very different in samples that are thick relative to xi and lambda.

@loganbvh (Owner)

> I am not sure how computationally expensive it would be to adjust the mesh at each time step

Changing the structure of the mesh at each time step would be extremely slow.

@Kyroba (Author) commented Oct 13, 2023

I was hoping that, since its superconducting properties are strongly anisotropic and dominated by the CuO2 planes, TDGL would work well for a 200 nm thick film. I guess I can still observe the vortex dynamics throughout a pinning array without directly comparing the nucleation to MOI.

@Kyroba (Author) commented Oct 13, 2023

> I am not sure how computationally expensive it would be to adjust the mesh at each time step
>
> Changing the structure of the mesh at each time step would be extremely slow

Maybe you could 'fake' the inelastic scattering effect? Perhaps you could interpolate the order parameter in the wake of vortex movement, like you would blur an image, with the degree of blurring related to gamma? I'm not even sure this is a realistic approach.

[image: simulation snapshot with gamma = 10]

This section with gamma = 10 reminded me of slime mold simulations that I did in the past.

@loganbvh (Owner)

What are the dimensions of the MOI samples?

@Kyroba (Author) commented Oct 13, 2023

5 x 5 mm in area and 200 nm thick. The samples were at roughly 10 K to provide optimal resolution with MOI.

@loganbvh (Owner)

Screening will definitely be a very big effect in that case. Let's say london_lambda = 5 um, so the effective penetration depth is Lambda = london_lambda^2/d = (5 um)^2 / (0.2 um) = 125 um << 5,000 um, meaning screening can't be neglected. Even including screening in a simulation of a much smaller geometry will not be sufficient, since the strength of screening is dictated by Lambda / (minimum sample dimension perpendicular to the applied field). It's clear from the top left MOI image that the magnetic field is essentially completely screened from the interior of the sample before vortices penetrate.

To get the same effective strength of screening for your simulated geometry of L = Lx = Ly = 1000 nm, you could try artificially making london_lambda much smaller, such that london_lambda^2 / (d * L) = 125 / 5,000, i.e., london_lambda ≈ 70 nm, and running the simulation with screening included (I would recommend trying the GPU for this). This obviously means that the GL parameter kappa will be much smaller than in real YBCO, but it should capture the effect of screening in a much more realistic way. The lower critical field will also be much larger with a shorter london_lambda.
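A quick numeric restatement of that scaling argument (a sketch using the numbers above):

import math

# Experimental sample (numbers from this thread)
lam_exp = 5_000    # london_lambda in nm (assuming 5 um)
d = 200            # film thickness in nm
L_exp = 5_000_000  # lateral sample size in nm (5 mm)

Lambda_exp = lam_exp**2 / d  # effective penetration depth: 125,000 nm = 125 um
ratio = Lambda_exp / L_exp   # dimensionless screening strength: 0.025

# Choose london_lambda for the simulated geometry so the ratio matches.
L_sim = 1_000  # nm
lam_sim = math.sqrt(ratio * d * L_sim)
print(lam_sim)  # ~70.7 nm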

By the way, here is a London simulation using SuperScreen of the sheet current density and magnetic field for B_{z, applied} = 0.1 mT:

[images: SuperScreen sheet current density and magnetic field for B_{z, applied} = 0.1 mT]

@Kyroba (Author) commented Oct 13, 2023

I have been meaning to try SuperScreen as well! It looks really good! Thank you for the detailed explanation. Also, is there any reason you chose 5 um for london_lambda instead of 0.150 um? Was it to get a quick output from SuperScreen? I also just noticed the boundary around the 5 x 5 mm sample; what is that? I took a look at the documentation and noticed that the mesh is generated such that it envelops everything, even holes, with some padding. Is this because it also calculates the inductance over vacuum?

[image: mesh with a padded vacuum boundary around the film]

Edit: I just noticed the non-zero field around the film in the second figure, so yes, this must be the case :)

@loganbvh (Owner)

> Also, any reason you chose 5um for London Lambda instead of 0.150um?

You should apply the reasoning I described using whatever value of london_lambda you think the samples have. Here are some references: https://hoffman.physics.harvard.edu/materials/ybco/. If london_lambda in the ab plane is really 150-200 nm, then the method I described may not work: the scaled london_lambda for your simulated geometry would only be about 3 nm. Unfortunately, the real sample geometry makes it very difficult to apply 2D TDGL in any realistic way.

In SuperScreen, the solve time is independent of Lambda, and xi is completely irrelevant. The simulation method is inherently self-consistent so, unlike in TDGL, there is no need to do an iterative calculation of the induced magnetic field. This makes SuperScreen fast regardless of the strength of screening.

> I just noticed the boundary around the 5x5mm sample, what is that? I took a look at the documentation and noticed that the mesh is generated such that it envelops everything even holes with some padding.

In SuperScreen, the vacuum inside of holes always has to be meshed. Meshing the vacuum surrounding the film is optional (see below, where I set buffer=0 when generating the mesh). By default, a small region of vacuum around the film is meshed because one often wants to visualize the magnetic field outside the sample as well.
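For reference, the corresponding call would be something like this (a sketch reusing the mesh parameters from elsewhere in this thread):

device.make_mesh(max_edge_length=100, smooth=100, buffer=0)  # no vacuum margin around the film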

[images: mesh and solution with buffer=0, i.e. no vacuum region surrounding the film]

@Kyroba (Author) commented Oct 14, 2023

I started using SuperScreen but noticed that the mesh generation takes an extremely long time for that 5000 x 5000 um sample you showed. I also cannot get an output similar to yours for the same parameters; this may be a result of my mesh size. Your computer seems to handle the meshing stage really well, reaching 20,000 it/s, whereas mine can only manage a maximum of 2,000 it/s. I will try uninstalling SciPy and reinstalling it with conda as you suggested before. Sorry for posting SuperScreen-related issues here, but I believe it is best given the context of the discussion.

length_units = "um"
# Material parameters
london_lambda = 5
d = 0.2
layers = [sc.Layer("base", london_lambda=london_lambda, thickness=d, z0=0)]

# Device geometry
total_width = 5000
total_length = 5000

films = [sc.Polygon("film", layer="base", points=box(total_width, total_length))]
device = sc.Device(
    "pure_square",
    layers=layers,
    films=films,
    #holes=antidots,
    length_units=length_units,
)
device.make_mesh(max_edge_length=100, smooth=100)
fig, ax = device.plot_mesh(show_sites=False)
_ = device.plot_polygons(ax=ax, color="k")

image

applied_field = sc.sources.ConstantField(0.1)

solutions = sc.solve(
    device=device,
    applied_field=applied_field,
    field_units="mT",
    current_units="mA",
)
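To visualize the final self-consistent iteration (the same call that comes up below):

_ = solutions[-1].plot_currents()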

[images: resulting solution plots]

@loganbvh (Owner)

Hmm, that doesn't make much sense to me. Try running this notebook on Google Colab and locally, and let me know if you still see a discrepancy: https://gist.github.com/loganbvh/cc5453195153f7b5831ea95174b4091f

@Kyroba (Author) commented Oct 14, 2023

Okay, it is now working, thank you! I am not sure what happened; it could be because I did not plot the last entry in solutions with _ = solutions[-1].plot_currents().

@loganbvh (Owner) commented Oct 14, 2023

Does the meshing still take a very long time? This takes about 23 seconds on my laptop:

[image: timing output showing mesh generation takes about 23 seconds]

@Kyroba (Author) commented Oct 14, 2023

My PC just crashed halfway through a 2-day simulation :( Do you think I can just load the h5 file and resume it? I will test how long mesh generation takes in a bit.

@Kyroba (Author) commented Oct 14, 2023

The mesh took roughly 15 seconds to generate. In your first SuperScreen post above, the rounded corners of the vacuum region looked like areas of higher mesh density, so I assumed you had used a smaller max_edge_length. It would be nice to have mesh-generation status output like in pyTDGL, so that if there are malformed cells we can quickly adjust the settings.

@loganbvh (Owner)

> Do you think I can just load the h5 file and resume it?

You may be able to load the TDGLData from the H5 file and use it as the seed solution for a new simulation. I have not tried this, but I think it should work.

import h5py
import tdgl

from tdgl.solution.data import TDGLData, get_data_range

# Find the index of the last saved time step in the H5 file.
with h5py.File(<h5-path>, "r") as h5file:
    first, last = get_data_range(h5file)

# Minimal stand-in object carrying the tdgl_data attribute that is read
# from a seed solution.
class DummySolution:
    pass

tdgl_data = TDGLData.from_hdf5(<h5-path>, last)
seed_solution = DummySolution()
seed_solution.tdgl_data = tdgl_data

solution = tdgl.solve(device, ..., seed_solution=seed_solution)

@Kyroba (Author) commented Oct 14, 2023

Would that then start to append the rest of the simulation to the same file?

@loganbvh (Owner)

> Would that then start to append the rest of the simulation to the same file?

No, you would have to specify a new options.output_file.
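For example (a sketch; the filename is illustrative):

options.output_file = "resumed-simulation.h5"  # any new file name
solution = tdgl.solve(device, options, ..., seed_solution=seed_solution)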

> It would be nice to have a similar mesh generation status like in pyTDGL so that if there are malformed cells we can quickly adjust the settings.

SuperScreen actually doesn't need to construct Voronoi cells at all. The numerical method that SuperScreen implements will work with practically any mesh - it is much more forgiving than pyTDGL in that way. I could add a progress bar for mesh smoothing, but to be honest, smoothing is completely optional and doesn't affect the simulation results very much in SuperScreen.

@Kyroba (Author) commented Oct 14, 2023

The file cannot be loaded due to: Unable to open file (bad object header version number). I also tried visualizing it, but that results in the same error. Perhaps the file needs some data marking where it ends before it can be loaded?

@Kyroba (Author) commented Oct 14, 2023

In SuperScreen, to get the magnetization of the film, can we use the same method with the sheet current?

[image: expression for the magnetization in terms of the sheet current]

The stream function provides the local magnetization, but can we also output the total magnetization to create a magnetic hysteresis plot (Bz vs. Mz)?

@loganbvh (Owner) commented Oct 14, 2023 via email

@Kyroba (Author) commented Oct 14, 2023

I am not too familiar with these models, but I feel intuitively that you could obtain a curve by keeping the scale of the sheet current the same and adjusting the field like this:

[images: sheet current at Bz = 100 mT and Bz = 1000 mT, plotted on the same scale]

Then take the average pixel intensity over the entire film to determine a magnetization value at each field. What do you think? Technically, since we have the K data, there is no need to use pixel intensity. I have not read much about simulation techniques for superconductors, but with MOI data you can effectively determine the magnetization via this method, so I thought the same could be applied here.

@loganbvh (Owner)

Yes, you can calculate Mz vs. Bz, but the curve will always be a straight line with negative slope that passes through the origin, because the supercurrent density scales linearly with the magnetic field.

[image: Mz vs. Bz, a straight line with negative slope through the origin]
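In the stream-function picture mentioned above, this is easy to see (a sketch of the reasoning, not an official derivation from the SuperScreen docs): the stream function g is the local magnetization density, so the total moment is m_z = \int_{film} g(r) d^2r. Since the London response is linear in the applied field, g, and hence m_z, is proportional to B_{z, applied}, giving M_z = \chi B_z with a constant negative \chi.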
