With lingpy's coverage method, we can search for mutual coverage in the data. For the CLICS case, however, if we want the three or four subsets (1000+ concepts for 300 languages, 500+ concepts for 500 languages, 250+ concepts for 1000 languages, and one meta-set with all data), more sophistication is needed. I imagine using Concepticon to pre-analyse roughly ten promising datasets in lexibank. Since coverage varies within each dataset, though, the method should additionally test the individual coverage of each language variety in the data. It is not yet clear how to do this concretely, as the problem is not entirely trivial, but as an approximation one could preselect the most promising concepts based on concept coverage and then test, for each language variety in the data, whether it meets a given coverage threshold. These values would then be reported for each release.
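The approximation described above could be sketched roughly as follows. This is a minimal, self-contained illustration, not lingpy or pycldf code: it assumes the data has already been reduced to a mapping from language variety to the set of Concepticon concepts it attests, and the function name `select_subset` and the toy data are hypothetical.

```python
# Hypothetical sketch of the proposed approximation:
# 1. preselect the n best-covered concepts across all varieties,
# 2. keep only varieties covering at least `threshold` of that subset.
from collections import Counter

def select_subset(data, n_concepts, threshold):
    """data: {variety: set of concept IDs}. Returns (selected concepts,
    varieties meeting the coverage threshold on that concept subset)."""
    # Rank concepts by how many varieties attest them.
    counts = Counter(c for concepts in data.values() for c in concepts)
    selected = {c for c, _ in counts.most_common(n_concepts)}
    # Per-variety coverage check against the preselected concepts.
    kept = {
        variety for variety, concepts in data.items()
        if len(concepts & selected) / len(selected) >= threshold
    }
    return selected, kept

# Toy example (made-up varieties and concepts)
data = {
    "varA": {"HAND", "FOOT", "EYE", "DOG"},
    "varB": {"HAND", "FOOT", "EYE"},
    "varC": {"HAND", "FOOT"},
}
concepts, varieties = select_subset(data, n_concepts=3, threshold=0.8)
# concepts -> {"HAND", "FOOT", "EYE"}; varieties -> {"varA", "varB"}
```

For the real datasets one would run this once per target subset size (1000, 500, 250 concepts) with an agreed threshold, and report the resulting counts per release.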