
Running VASP interactively throws a BrokenPipeError #218

Closed
raynol-dsouza opened this issue May 31, 2021 · 23 comments · Fixed by #222
Labels: bug (Something isn't working), help wanted (Extra attention is needed)

Comments

@raynol-dsouza
Contributor

The following code:

import numpy as np  # needed for np.arange below; pr is an existing pyiron Project

job = pr.create.job.Vasp('vasp_job')
job.structure = pr.create.structure.ase.bulk('Al', cubic=True).repeat(2)
job.set_kpoints(mesh=[3,3,3])
job.set_encut(encut=250.)
job.calc_static()
job.server.cores = 3
job.interactive_open()
for i in np.arange(3):
    print(i)
    job.run()
job.interactive_close()

throws the following error:

BrokenPipeError                           Traceback (most recent call last)
<ipython-input-9-3c3d382797c1> in <module>
      7 for i in np.arange(3):
      8     print(i)
----> 9     job.run()
     10 job.interactive_close()

~/pyiron_repos/pyiron_base/pyiron_base/generic/util.py in decorated(*args, **kwargs)
    211                         stacklevel=2
    212                     )
--> 213             return function(*args, **kwargs)
    214         return decorated
    215 

~/pyiron_repos/pyiron_base/pyiron_base/job/generic.py in run(self, delete_existing_job, repair, debug, run_mode, run_again)
    665                 self._run_if_submitted()
    666             elif status == "running":
--> 667                 self._run_if_running()
    668             elif status == "collect":
    669                 self._run_if_collect()

~/pyiron_repos/pyiron_base/pyiron_base/job/interactive.py in _run_if_running(self)
    163         """
    164         if self.server.run_mode.interactive:
--> 165             self.run_if_interactive()
    166         elif self.server.run_mode.interactive_non_modal:
    167             self.run_if_interactive_non_modal()

~/pyiron_repos/pyiron_atomistics/pyiron_atomistics/vasp/interactive.py in run_if_interactive(self)
    255 
    256     def run_if_interactive(self):
--> 257         self.run_if_interactive_non_modal()
    258         self._interactive_check_output()
    259         self._interactive_vasprun = Outcar()

~/pyiron_repos/pyiron_atomistics/pyiron_atomistics/vasp/interactive.py in run_if_interactive_non_modal(self)
    251                     self._logger.debug("Vasp library: " + text)
    252                     self._interactive_library.stdin.write(text + "\n")
--> 253             self._interactive_library.stdin.flush()
    254         self._interactive_fetch_completed = False
    255 

BrokenPipeError: [Errno 32] Broken pipe

Has anyone come across this error before? Is this related to #216?

@raynol-dsouza added the bug and help wanted labels on May 31, 2021
@niklassiemer
Member

At least it is not directly related, i.e. it is not the same error. The Lammps calculations stop in the interactive_collect step, at one Lammps-internal step, with an ArgumentError.

@raynol-dsouza
Contributor Author

Hmm. The code snippet with Lammps/LammpsInteractive works fine for me.

@raynol-dsouza
Contributor Author

I noticed that if I run it for a single step (i=1), it doesn't throw an error. For i=2, it throws this warning:

WARNING - VASP calculation exited before interactive_close() - already converged?

Seems to me that the job isn't running interactively?

The job_table shows the status as finished for i=1,2, and aborted for i=3.

@jan-janssen
Member

Can you post your INCAR file? If you comment out the calc_static() line, I assume it should work fine.

A typical INCAR should include:

POTIM = 0.0
IBRION = -1
INTERACTIVE = .TRUE.
NSW = 2000  # maximum number of possible steps 

@raynol-dsouza
Contributor Author

@jan-janssen you're right. It works if I do not specify calc_static().

The INCAR file with calc_static() looks like this:

SYSTEM=vasp_job_stat #jobname
PREC=Accurate
ALGO=Fast
LREAL=.FALSE.
LWAVE=.FALSE.
LORBIT=0
ENCUT=250.0
IBRION=-1
NELM=100
NSW=0
INTERACTIVE=.TRUE.
POTIM=0.0
ISYM=0

while the INCAR file without it looks the same as what you commented.
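As a stopgap until the default is changed, one option is to restore a positive NSW after calc_static() forces it to 0. This is a sketch, not the merged fix; `job` is the interactive VASP job from the opening snippet, and the `job.input.incar[...]` accessor is the same one used later in this thread:

```python
# Workaround sketch (assumption, not the merged fix): calc_static()
# writes NSW=0, so VASP exits after the first ionic step and the next
# stdin write hits the closed pipe. Restoring a positive NSW before
# opening the interactive channel keeps the process accepting steps.
job.calc_static()
job.input.incar['NSW'] = 2000  # upper bound on interactive ionic steps
job.interactive_open()
```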

@raynol-dsouza
Contributor Author

I now end up with this warning while calling interactive_close():

WARNING - Unable to parse the vasprun.xml file. Will attempt to get data from OUTCAR

and this error after:

TypeError                                 Traceback (most recent call last)
<ipython-input-34-2443f5e00692> in <module>
      7     print(i)
      8     job_3.run()
----> 9 job_3.interactive_close()

~/pyiron_repos/pyiron_atomistics/pyiron_atomistics/vasp/interactive.py in interactive_close(self)
     88             self._output_parser = Output()
     89             if self["vasprun.xml"] is not None:
---> 90                 self.run()
     91 
     92     def interactive_energy_tot_getter(self):

~/pyiron_repos/pyiron_base/pyiron_base/generic/util.py in decorated(*args, **kwargs)
    211                         stacklevel=2
    212                     )
--> 213             return function(*args, **kwargs)
    214         return decorated
    215 

~/pyiron_repos/pyiron_base/pyiron_base/job/generic.py in run(self, delete_existing_job, repair, debug, run_mode, run_again)
    667                 self._run_if_running()
    668             elif status == "collect":
--> 669                 self._run_if_collect()
    670             elif status == "suspend":
    671                 self._run_if_suspended()

~/pyiron_repos/pyiron_base/pyiron_base/job/generic.py in _run_if_collect(self)
   1422             self.project.db.item_update(self._runtime(), self.job_id)
   1423         if self.status.collect:
-> 1424             if not self.convergence_check():
   1425                 self.status.not_converged = True
   1426             else:

~/pyiron_repos/pyiron_atomistics/pyiron_atomistics/vasp/base.py in convergence_check(self)
    433         """
    434         # Checks if sufficient empty states are present
--> 435         if not self.nbands_convergence_check():
    436             return False
    437         if "IBRION" in self["input/incar/data_dict"]["Parameter"]:

~/pyiron_repos/pyiron_atomistics/pyiron_atomistics/dft/job/generic.py in nbands_convergence_check(self)
    368             bool : True if the highest band is unoccupied, False if the highest band is occupied
    369         """
--> 370         return np.all(np.isclose(self["output/electronic_structure/occ_matrix"][:,:,-1], 0)) #shape is n_spin x n_kpoints x n_bands
    371 
    372     # Backward compatibility

TypeError: 'NoneType' object is not subscriptable

Am I not specifying something in the input?

PS. I'm sorry for the long comments. It's also my first time running VASP in pyiron.

@jan-janssen
Member

@raynol-dsouza Can you open a pull request that modifies the NSW parameter if it is set to 0? I would suggest adding it at https://github.com/pyiron/pyiron_atomistics/blob/master/pyiron_atomistics/vasp/interactive.py#L287 - currently the parameters are only modified if they were not set before.

@jan-janssen
Member

@sudarsan-surendralal Can you take a look at nbands_convergence_check(self)? Most likely we have to accept that the occupation can be None.
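A minimal stand-alone sketch of that guard, with the occupation matrix passed in directly (the real method reads it from `self["output/electronic_structure/occ_matrix"]`; treating None as passing is one possible choice, skipping the check for interactive jobs is another):

```python
import numpy as np

def nbands_convergence_check(occ_matrix):
    """Sketch of the proposed fix: tolerate a missing occupation matrix.

    occ_matrix has shape n_spin x n_kpoints x n_bands, or is None when
    an interactive run never produced electronic-structure output.
    """
    if occ_matrix is None:
        # Nothing to check against; here we treat it as passing the check.
        return True
    # True if the highest band is unoccupied at every spin/k-point
    return bool(np.all(np.isclose(np.asarray(occ_matrix)[:, :, -1], 0)))
```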

@sudarsan-surendralal
Member

@jan-janssen what's the way to find out if a job is interactive or not? We can ignore this check only for interactive calculations.
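Going by the tracebacks earlier in this thread, the job's server run mode already distinguishes the interactive flavours. A minimal sketch with stand-in classes (the attribute names `run_mode.interactive` and `run_mode.interactive_non_modal` appear in `_run_if_running`; the classes here are illustrative only, not pyiron's):

```python
# Stand-in classes for illustration; in pyiron the real objects are
# job.server and job.server.run_mode (attribute names taken from the
# traceback in this thread).
class RunMode:
    def __init__(self, mode):
        self.interactive = mode == "interactive"
        self.interactive_non_modal = mode == "interactive_non_modal"

class Server:
    def __init__(self, mode="modal"):
        self.run_mode = RunMode(mode)

def is_interactive(server):
    """True for either interactive run mode, so the nbands check can be skipped."""
    return server.run_mode.interactive or server.run_mode.interactive_non_modal
```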

@hari-ushankar

hari-ushankar commented Jul 30, 2021

Hi everyone,

I just wanted to see if I could get some inputs on this particular bug.

For context, I use ProtocolQMMM to do VASP jobs interactively. It seems that when the ionic iterations hit the NSW value supplied in INCAR, the interactive job just exits without wrapping up into HDF5.

I'm still using an older version of pyiron since there was some issue with the compatibility of pyiron_contrib with the new pyiron_atomistics repo.

Any suggestions on how to tackle this? Is updating to newer codebases the only way to go?

My versions of code:
pyiron
pyiron_contrib

@raynol-dsouza
Contributor Author

Hi @hari-ushankar,

Can you share a code snippet of how you define ProtocolQMMM and call run()? I am unsure how the VASP interactive job runs for more than one step with the repos you are using, considering that it threw the error mentioned in this issue when I tried running VASP interactively.

I have reworked protocols within the pyiron_contrib master branch to comply with changes in pyiron_atomistics, and I believe it shouldn't be that difficult for me to update ProtocolQMMM too. However, since I have never used the protocol myself, a short notebook with an example would help!

@jan-janssen jan-janssen reopened this Aug 2, 2021
@hari-ushankar

Hi @raynol-dsouza ,

Thanks for following up. Here is a code snippet of how I run ProtocolQMMM:

from pyiron.project import Project 
from pyiron_contrib.protocol.compound.qmmm import ProtocolQMMM 
pr = Project('test-QMMM')

# setting up structure and species
host = 'Al'
solute = 'Ni'
bulk_struct = pr.create_ase_bulk(host, cubic=True).repeat((1, 1, 8)) # a simple structure 
## defining reference jobs here:

## MM job settings
ref_lammps = pr.create_job(pr.job_type.Lammps, 'ref_lammps')
ref_lammps.structure = bulk_struct.copy()
ref_lammps.potential = ref_lammps.list_potentials()[0]
ref_lammps.save()

## VASP settings(Kpoints,XC,input file tags) go here
ref_vasp = pr.create_job(pr.job_type.Vasp, 'ref_vasp')
ref_vasp.structure = sol_struct.copy()
ref_vasp.set_kpoints(mesh=[1, 1, 3]) 
ref_vasp.executable.executable_path = '~/pyiron/resources/vasp/bin/run_vasp_5.4.4_std_mpi.sh'
ref_vasp.server.cores = 4
ref_vasp.input.incar['NCORE'] = 2
ref_vasp.input.incar['ISYM'] = 0
ref_vasp.input.incar['NSW'] = 150
ref_vasp.save()

## simulation settings-steps,tol and # of shells
n_core_shells = 1
n_buffer_shells = 2
n_steps = 2000
f_tol = 0.01


## now creating a ProtocolQMMM job
qmmm_solute = pr.create_job(ProtocolQMMM, 'qmmm_solute_b2_Pb')
qmmm_solute.input.structure = bulk_struct.copy()
qmmm_solute.input.mm_ref_job_full_path = ref_lammps.path 
qmmm_solute.input.qm_ref_job_full_path = ref_vasp.path
qmmm_solute.input.seed_ids = [middle_id]
qmmm_solute.input.shell_cutoff = midpoint_1NN_2NN
qmmm_solute.input.n_core_shells = n_core_shells
qmmm_solute.input.n_buffer_shells = n_buffer_shells
qmmm_solute.input.seed_species = [solute]
qmmm_solute.input.n_steps = n_steps
qmmm_solute.input.f_tol = f_tol
qmmm_solute.server.cores = 4
qmmm_solute.server.queue='nodes'
qmmm_solute.input.filler_width = 8.0
qmmm_solute.input.vacuum_width = 3.0
qmmm_solute.set_output_whitelist(
    **{
        'calc_static_qm':{
            'energy_pot': 1,
            'positions':10
        },
        'calc_static_mm': {
            'energy_pot':1
        },
        'max_force_qm': {
            'amax': 1
             },
        'max_force_mm': {
            'amax': 1
        }
     }
)
qmmm_solute.run()

The job I define above runs to 150 iterations and then throws the BrokenPipeError. Here is the stack trace from that run (in time.out file of hdf5):

Traceback (most recent call last):
  File "~/miniconda3/envs/pyiron_MPIE/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "~/miniconda3/envs/pyiron_MPIE/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "~/pyiron/pyiron/pyiron/cli/__main__.py", line 2, in <module>
    main()
  File "~/pyiron/pyiron/pyiron/cli/__init__.py", line 61, in main
    args.cli(args)
  File "~/pyiron/pyiron/pyiron/cli/wrapper.py", line 37, in main
    submit_on_remote=args.submit
  File "~/pyiron/pyiron/pyiron/base/job/wrapper.py", line 147, in job_wrapper_function
    job.run()
  File "~/pyiron/pyiron/pyiron/base/job/wrapper.py", line 113, in run
    self.job.run_static()
  File "~/pyiron_contrib/pyiron_contrib/protocol/generic.py", line 642, in run_static
    self.execute()
  File "~/pyiron_contrib/pyiron_contrib/protocol/generic.py", line 636, in execute
    super(Protocol, self).execute()
  File "~/pyiron_contrib/pyiron_contrib/protocol/generic.py", line 433, in execute
    self.graph.active_vertex.execute()
  File "~/pyiron_contrib/pyiron_contrib/protocol/generic.py", line 328, in execute
    output_data = self.command(**self.input.resolve())
  File "~/pyiron_contrib/pyiron_contrib/protocol/primitive/one_state.py", line 172, in command
    self._job.run()
  File "~/pyiron/pyiron/pyiron/base/job/generic.py", line 688, in run
    self._run_if_running()
  File "~/pyiron/pyiron/pyiron/base/job/interactive.py", line 167, in _run_if_running
    self.run_if_interactive()
  File "~/pyiron/pyiron/pyiron/vasp/interactive.py", line 264, in run_if_interactive
    self.run_if_interactive_non_modal()
  File "~/pyiron/pyiron/pyiron/vasp/interactive.py", line 260, in run_if_interactive_non_modal
    self._interactive_library.stdin.flush()
BrokenPipeError: [Errno 32] Broken pipe

Also, to add: a normal VASP interactive job runs fine on a node and through the cluster queue system. This behaviour happens only when run with ProtocolQMMM. I think it could also be related to how reference jobs are copied over when a Protocol job is run; this was the discussion in the pyiron_contrib repo.

Some insights and tips would be appreciated.

@raynol-dsouza
Contributor Author

Hi @hari-ushankar,

I created a new branch on pyiron_contrib called qmmm_fix, which uses the latest pyiron_atomistics. The following code ran for me:

from pyiron_atomistics import Project 
import pyiron_contrib

pr = Project('test-QMMM')

# setting up structure and species
host = 'Al'
solute = 'Ni'
bulk_struct = pr.create.structure.bulk(host, cubic=True).repeat((1, 1, 8)) # a simple structure
sol_struct = pr.create.structure.bulk(solute, cubic=True).repeat((1, 1, 8)) # a simple structure 
potential = '2004--Mishin-Y--Ni-Al--LAMMPS--ipr2'

## defining reference jobs here:

## MM job settings
ref_lammps = pr.create.job.Lammps('ref_lammps')
ref_lammps.structure = bulk_struct.copy()
ref_lammps.potential = potential
ref_lammps.save()

## VASP settings(Kpoints,XC,input file tags) go here
ref_vasp = pr.create.job.Vasp('ref_vasp')
ref_vasp.structure = sol_struct.copy()
ref_vasp.set_kpoints(mesh=[1, 1, 3]) 
# ref_vasp.executable.executable_path = '~/pyiron/resources/vasp/bin/run_vasp_5.4.4_std_mpi.sh'
ref_vasp.interactive_prepare()
ref_vasp.save()

## simulation settings-steps, tol and # of shells
n_core_shells = 1
n_buffer_shells = 2
n_steps = 5
f_tol = 0.01

# now creating a ProtocolQMMM job
qmmm_solute = pr.create.job.ProtocolQMMM('qmmm_solute_b2_Pb')
qmmm_solute.input.structure = bulk_struct.copy()
qmmm_solute.input.mm_ref_job_full_path = ref_lammps.path 
qmmm_solute.input.qm_ref_job_full_path = ref_vasp.path
qmmm_solute.input.seed_ids = [0]
qmmm_solute.input.shell_cutoff = bulk_struct.get_neighbors().distances[0][0]
qmmm_solute.input.n_core_shells = n_core_shells
qmmm_solute.input.n_buffer_shells = n_buffer_shells
qmmm_solute.input.seed_species = [solute]
qmmm_solute.input.n_steps = n_steps
qmmm_solute.input.f_tol = f_tol
# qmmm_solute.server.cores = 4
# qmmm_solute.server.queue='nodes'
qmmm_solute.input.filler_width = 8.0
qmmm_solute.input.vacuum_width = 3.0
qmmm_solute.set_output_whitelist(
    **{
        'calc_static_qm':{
            'energy_pot': 1,
            'positions':10
        },
        'calc_static_mm': {
            'energy_pot':1
        },
        'max_force_qm': {
            'amax': 1
             },
        'max_force_mm': {
            'amax': 1
        }
     }
)
qmmm_solute.run()

I did however change a few parameters:

sol_struct = pr.create.structure.bulk(solute, cubic=True).repeat((1, 1, 8)) # a simple structure 
qmmm_solute.input.seed_ids = [0]
qmmm_solute.input.shell_cutoff = bulk_struct.get_neighbors().distances[0][0] 

-- since these were inputs to the protocol, but they weren't specified in the snippet you sent.

-- I changed n_steps to 5, as this was just a test to see if the protocol runs.

-- the commented-out code wasn't necessary for the test; however, it will be necessary when you run on your cluster.

Do note the changes in the VASP reference job. There was a bug that was raised in this issue, but fixed in #220. This is how I would define the VASP reference jobs now.

It would be great if you could run a test job using the branch qmmm_fix while on the latest pyiron_atomistics and let me know if your issue is fixed!

@hari-ushankar

hari-ushankar commented Aug 9, 2021

Hi @raynol-dsouza ,

I tried running the example script you sent to check if things work fine.

But I have run into an issue with creating jobs using the new git repos.

For context, I have my new git repos stashed in a separate dir, and I sys.path.insert those git repos when I run the notebook.

import sys
sys.path.insert(0, '~/new_git_repos/pyiron_atomistics')
sys.path.insert(1,'~/new_git_repos/pyiron_contrib')
sys.path.insert(2,'~/new_git_repos/pyiron_base')
sys.path.insert(3,'~/new_git_repos/aimsgb')

from pyiron_atomistics import Project 
import pyiron_contrib

pr = Project('test-fix-QMMM')
# setting up structure and species
host = 'Al'
solute = 'Ni'
bulk_struct = pr.create.structure.bulk(host, cubic=True).repeat((1, 1, 8)) 

## defining reference jobs here:
## MM job settings
ref_lammps = pr.create.job.Lammps('ref_lammps')
ref_lammps.structure = bulk_struct.copy()
ref_lammps.potential = ref_lammps.list_potentials()[0]
ref_lammps.save()

The code fails here:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-2-e224c3562a9a> in <module>
      2 ref_lammps.structure = bulk_struct.copy()
      3 ref_lammps.potential = ref_lammps.list_potentials()[0]
----> 4 ref_lammps.save()

~/new_git_repos/pyiron_base/pyiron_base/job/generic.py in save(self)
   1169         job_id = self.project.db.add_item_dict(self.db_entry())
   1170         self._job_id = job_id
-> 1171         self.refresh_job_status()
   1172         if self._check_if_input_should_be_written():
   1173             self.project_hdf5.create_working_directory()

~/new_git_repos/pyiron_base/pyiron_base/job/generic.py in refresh_job_status(self)
    433         if self.job_id:
    434             self._status = JobStatus(
--> 435                 initial_status=self.project.db.get_job_status(self.job_id),
    436                 db=self.project.db,
    437                 job_id=self.job_id,

~/new_git_repos/pyiron_base/pyiron_base/database/generic.py in get_job_status(self, job_id)
    704     def get_job_status(self, job_id):
    705         try:
--> 706             return self.get_item_by_id(item_id=job_id)["status"]
    707         except KeyError:
    708             return None

~/new_git_repos/pyiron_base/pyiron_base/database/generic.py in get_item_by_id(self, item_id)
    576         if np.issubdtype(type(item_id), np.integer):
    577             try:
--> 578                 return self.__get_items("id", int(item_id))[-1]
    579             except TypeError as except_msg:
    580                 raise TypeError(

~/new_git_repos/pyiron_base/pyiron_base/database/generic.py in __get_items(self, col_name, var)
    479         if not self._keep_connection:
    480             self.conn.close()
--> 481         return [dict(zip(col.keys(), col._mapping.values())) for col in row]
    482 
    483     def item_update(self, par_dict, item_id):

~/new_git_repos/pyiron_base/pyiron_base/database/generic.py in <listcomp>(.0)
    479         if not self._keep_connection:
    480             self.conn.close()
--> 481         return [dict(zip(col.keys(), col._mapping.values())) for col in row]
    482 
    483     def item_update(self, par_dict, item_id):

AttributeError: Could not locate column in row for column '_mapping'

Does this occur because of how jobs are created in the old pyiron repo vs the new pyiron_atomistics repo?

@raynol-dsouza
Contributor Author

I believe the issue comes from the line:

ref_lammps.potential = ref_lammps.list_potentials()[0]

I had run into a similar issue a while ago, but I think it was addressed in a PR, I do not know which one.

Could you make sure that pyiron_atomistics is on the latest version (git), and if the issue persists, try:

ref_lammps.potential = '2004--Mishin-Y--Ni-Al--LAMMPS--ipr2'

and check if this works?
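One way to rule out a path mix-up is to check which checkout each package was actually imported from, since the sys.path.insert calls may have lost to an installed copy. A generic sketch (demonstrated with a stdlib module; substitute `pyiron_base`, `pyiron_atomistics`, etc.):

```python
import importlib

def module_origin(name):
    """Return the file a module was loaded from (None for built-ins like sys)."""
    mod = importlib.import_module(name)
    return getattr(mod, "__file__", None)

# e.g. module_origin("pyiron_base") should point into ~/new_git_repos/...
# demonstrated here with a stdlib package:
print(module_origin("json"))
```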

@hari-ushankar

hari-ushankar commented Aug 9, 2021

My pyiron_atomistics version is:

commit 6c252d36679e183332272c71318a17b37e987b01
Merge: 934a2b0 ae2e5c4
Author: Niklas Siemer <70580458+niklassiemer@users.noreply.github.com>
Date:   Mon Aug 9 20:44:11 2021 +0200

    Merge pull request #313 from pyiron/benchmark

    Introduce benchmark tests as additional workflow

Initially, I tried with ref_lammps.potential = '2004--Mishin-Y--Ni-Al--LAMMPS--ipr2'.

It didn't work, which is why I switched to list_potentials()[0]. I'm also having trouble deleting jobs in the job directory.

this is the error log:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-2-5b63bad562d3> in <module>
      8 
      9 pr = Project('test-fix-QMMM')
---> 10 pr.remove_jobs(recursive=True)
     11 # setting up structure and species
     12 host = 'Al'

~/new_git_repos/pyiron_base/pyiron_base/project/generic.py in remove_jobs(self, recursive)
    990                     ).lower()
    991         if confirmed == "y":
--> 992             self.remove_jobs_silently(recursive=recursive)
    993         else:
    994             print(f"No jobs removed from '{self.base_name}'.")

~/new_git_repos/pyiron_base/pyiron_base/project/generic.py in remove_jobs_silently(self, recursive)
   1005             raise ValueError('recursive must be a boolean')
   1006         if not self.view_mode:
-> 1007             for job_id in self.get_job_ids(recursive=recursive):
   1008                 if job_id not in self.get_job_ids(recursive=recursive):
   1009                     continue

~/new_git_repos/pyiron_base/pyiron_base/project/generic.py in get_job_ids(self, recursive)
    399             user=self.user,
    400             project_path=self.project_path,
--> 401             recursive=recursive,
    402         )
    403 

~/new_git_repos/pyiron_base/pyiron_base/database/jobtable.py in get_job_ids(database, sql_query, user, project_path, recursive)
    301             user=user,
    302             project_path=project_path,
--> 303             recursive=recursive
    304         )["id"]
    305     else:

~/new_git_repos/pyiron_base/pyiron_base/database/jobtable.py in get_jobs(database, sql_query, user, project_path, recursive, columns)
    268             project_path=project_path,
    269             recursive=recursive,
--> 270             columns=columns
    271         )
    272         if len(df) == 0:

~/new_git_repos/pyiron_base/pyiron_base/database/jobtable.py in job_table(database, sql_query, user, project_path, recursive, columns, all_columns, sort_by, max_colwidth, full_table, element_lst, job_name_contains)
    211             project_path=project_path,
    212             recursive=recursive,
--> 213             element_lst=element_lst,
    214         )
    215         if full_table:

~/new_git_repos/pyiron_base/pyiron_base/database/jobtable.py in _job_dict(database, sql_query, user, project_path, recursive, job, sub_job_name, element_lst)
    113 
    114     s.logger.debug("sql_query: %s", str(dict_clause))
--> 115     return database.get_items_dict(dict_clause)
    116 
    117 

~/new_git_repos/pyiron_base/pyiron_base/database/generic.py in get_items_dict(self, item_dict, return_all_columns)
    700         if not self._keep_connection:
    701             self.conn.close()
--> 702         return [dict(zip(col.keys(), col._mapping.values())) for col in row]
    703 
    704     def get_job_status(self, job_id):

~/new_git_repos/pyiron_base/pyiron_base/database/generic.py in <listcomp>(.0)
    700         if not self._keep_connection:
    701             self.conn.close()
--> 702         return [dict(zip(col.keys(), col._mapping.values())) for col in row]
    703 
    704     def get_job_status(self, job_id):

AttributeError: Could not locate column in row for column '_mapping'

Edit:

I tried using ref_lammps.potential = '2004--Mishin-Y--Ni-Al--LAMMPS--ipr2' on a separate project.

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-4-aecc5fc668ad> in <module>
      1 ref_lammps = pr.create.job.Lammps('ref_lammps')
      2 ref_lammps.structure = bulk_struct.copy()
----> 3 ref_lammps.potential = '2004--Mishin-Y--Ni-Al--LAMMPS--ipr2'
      4 ref_lammps.save()

~/new_git_repos/pyiron_atomistics/pyiron_atomistics/lammps/base.py in potential(self, potential_filename)
    179                 potential_filename = potential_filename.split(".lmp")[0]
    180             potential_db = LammpsPotentialFile()
--> 181             potential = potential_db.find_by_name(potential_filename)
    182         elif isinstance(potential_filename, pd.DataFrame):
    183             potential = potential_filename

~/new_git_repos/pyiron_atomistics/pyiron_atomistics/atomistics/job/potentials.py in find_by_name(self, potential_name)
     75         if not mask.any():
     76             raise ValueError("Potential '{}' not found in database.".format(
---> 77                              potential_name))
     78         return self._potential_df[mask]
     79 

ValueError: Potential '2004--Mishin-Y--Ni-Al--LAMMPS--ipr2' not found in database.

@raynol-dsouza
Contributor Author

pyiron_atomistics looks like it is on the right branch. What about pyiron_base?

@hari-ushankar

pyiron_base's version:

commit e3aa971b8f3d7758af42257f4c4276fdd33f2c6c
Merge: 8b62baa 151f1da
Author: MuhammadHassani <mhassani86@gmail.com>
Date:   Wed Aug 4 11:40:11 2021 +0200

    Merge pull request #361 from pyiron/postgres_performance

    Postgres performance

@raynol-dsouza
Contributor Author

raynol-dsouza commented Aug 9, 2021

pyiron_base's version:

commit e3aa971b8f3d7758af42257f4c4276fdd33f2c6c
Merge: 8b62baa 151f1da
Author: MuhammadHassani mhassani86@gmail.com
Date: Wed Aug 4 11:40:11 2021 +0200

  Merge pull request #361 from pyiron/postgres_performance

  Postgres performance

This looks more like a database issue to me. @pmrv @muh-hassani any inputs? I cannot reproduce these errors on either my local machine or my Garching account.

I tried using ref_lammps.potential = '2004--Mishin-Y--Ni-Al--LAMMPS--ipr2' on a separate project.

What output do you get if you just use ref_lammps.list_potentials()?

@hari-ushankar

I get all the potentials associated with Al, the list goes on:

['Al_Mg_Mendelev_eam',
 'Zope_Ti_Al_2003_eam',
 'Al_H_Ni_Angelo_eam',
 'AlPb_Landa',
 'AlNbTi_Farkas',
 'AlPb_Landa_3_9796',
 'Al_Mg_Mendelev_eam',
 'Zope_Ti_Al_2003_eam',
 'Al_H_Ni_Angelo_eam',
 '2018--Dickel-D-E--Mg-Al-Zn--LAMMPS--ipr1',
 '2000--Landa-A--Al-Pb--LAMMPS--ipr1',
 '2004--Zhou-X-W--Al--LAMMPS--ipr2',
..........
..........

@raynol-dsouza
Contributor Author

Hmm. The potentials list seems to be outdated as well. I'm afraid I cannot be of much help at the moment. Let's wait till @pmrv and @muh-hassani have a look at these errors.

@max-hassani
Member

max-hassani commented Aug 9, 2021

@hari-ushankar
From your previous comments, I tried the following:

from pyiron_atomistics import Project 
import pyiron_contrib

pr = Project('test-QMMM')

# setting up structure and species
host = 'Al'
solute = 'Ni'
bulk_struct = pr.create.structure.bulk(host, cubic=True).repeat((1, 1, 8)) # a simple structure
sol_struct = pr.create.structure.bulk(solute, cubic=True).repeat((1, 1, 8)) # a simple structure 
ref_lammps = pr.create.job.Lammps('ref_lammps')
ref_lammps.structure = bulk_struct.copy()
ref_lammps.potential_list

The list I get, is different than what you have:

['Al_Ca_Mg_MEAM_Jang_HS_2020',
 'Al_Zn_Mg_MEAM_Jang_HS_2020',
 '1995--Angelo-J-E--Ni-Al-H--LAMMPS--ipr1',
 '1996--Farkas-D--Nb-Ti-Al--LAMMPS--ipr1',
 '1997--Liu-X-Y--Al-Mg--LAMMPS--ipr1',
 '1998--Liu-X-Y--Al-Mg--LAMMPS--ipr1',
 '1999--Liu-X-Y--Al-Cu--LAMMPS--ipr1',
 '1999--Mishin-Y--Al--LAMMPS--ipr1',
 '2000--Landa-A--Al-Pb--LAMMPS--ipr1',
 '2000--Sturgeon-J-B--Al--LAMMPS--ipr1',
 '2002--Mishin-Y--Ni-Al--LAMMPS--ipr1',
 '2003--Lee-B-J--Al--LAMMPS--ipr1',
 '2003--Zope-R-R--Al--LAMMPS--ipr1',
 '2003--Zope-R-R--Ti-Al--LAMMPS--ipr1',
 '2004--Liu-X-Y--Al--LAMMPS--ipr1',
 '2004--Mishin-Y--Ni-Al--LAMMPS--ipr1',
 '2004--Mishin-Y--Ni-Al--LAMMPS--ipr2',
 '2004--Zhou-X-W--Al--LAMMPS--ipr2',
 '2005--Mendelev-M-I--Al-Fe--LAMMPS--ipr1',
...
]

Can you check your .pyiron file in your home directory? What is the resource path? I have to mention that I am using the conda packages from the pyiron/dev module. I guess something is wrong with your .pyiron config file.

Can you also check echo $CONDA_PREFIX from your terminal and see what it prints? The potentials are stored under $CONDA_PREFIX/share/iprpy/Potential/, so if the pyiron/dev module is not loaded correctly, the potentials are also not found.

Update: @hari-ushankar, I just noticed that you are not from MPIE, so the above comments may not be very useful to you. You can try updating iprpy-data from conda-forge (https://anaconda.org/conda-forge/iprpy-data). This will provide all the recent potentials for you.

@hari-ushankar

Hi @muh-hassani ,

Thanks for the suggestion. I did install iprpy-data from conda-forge and I see the list of potentials now.

But I think I get the error when I save the job using save().

For context, I'm manually adding package paths into my Jupyter notebook to run my pyiron jobs. But I'm still using the old pyiron repo, and maybe this is causing the issue?

I'm a little bit worried that moving to pyiron_atomistics and using the recent codebase will break the original ProtocolQMMM code.

I suppose I'll try updating pyiron on conda and see if it resolves the issue.
