Jim Bowring edited this page Sep 5, 2016 · 9 revisions


2016.02.21 : A new release ET_Redux v3.4.3 (and all following releases) demonstrates the possibilities for SHRIMP data. See this video.



The initial aim of this project is to parse XML files produced by the SHRIMP control software, and replicate the initial stages of zircon U-Pb geochronology data-processing as implemented in Ken Ludwig's Excel add-in SQUID (i.e. 03 February 2011 version). In the short term, we aim to document the elementary algorithms in SQUID 2.50, and demonstrate their successful implementation outside of the restrictive Excel 2003 environment. In the longer term, thoroughly documented SQUID 2.50 algorithms will serve as both a useful starting point for potential redevelopment of SHRIMP data-reduction mathematics, and as a reference point against which such redevelopments can be benchmarked.

The current Java implementation of the SQUID 2.50 algorithms is Calamari.

SQUID 2.50 Data-reduction: Procedure

Following the successful development of a parser for XML files generated by the SHRIMP control software, the elementary calculations have been implemented essentially by example, using a single demo XML file (supplied by Geoscience Australia). The ET_Redux calculations are validated against intermediate steps preserved in Excel 2003-based SQUID workbooks ("SQUID-books").

A successfully parsed XML file comprises many "runs" (usually called "analyses" in the SHRIMP community; equivalent to "fractions" in IDTIMS). Each analysis comprises data acquisition at a set of XML "run-table entries" (i.e. nuclidic "species" of interest): the demonstration (demo) XML uses a set of 10 species: [196Zr2O], [204Pb], [204.05background], [206Pb], [207Pb], [208Pb], [238U], [248ThO], [254UO] and [270UO2]. During an analysis, data acquisition involves several cycles ("scans") through the set of species: each analysis in the demo XML comprises 6 scans.

Each analysis in the demo XML therefore comprises 60 species/scan combinations (XML: "measurement", herein termed "peak"). Each peak comprises:

  1. species-specific data (XML: "data name = [species]"), acquired over 10 "integrations", each of which is the integer number of ions counted during 10 sequential time-segments of duration (count_time_sec / 10) seconds, and
  2. secondary beam monitor (SBM) data (XML: "data name = SBM"), acquired over the same 10 integrations.

[SBM counts reflect overall secondary beam intensity, which in turn often reflects incident primary beam intensity. "SBM normalisation" (whereby species-specific counts are normalised to the concurrent SBM counts) is a commonly used option for mitigating variation in species-specific count rates that arises simply from varying beam intensity.]
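The per-peak structure described above can be sketched as a small data model. This is an illustrative sketch only: the class and method names (`Peak`, `sbmNormalise`) are hypothetical and not taken from Calamari. Each peak holds 10 species-specific integrations and 10 concurrent SBM integrations, and SBM normalisation divides each species count by its concurrent SBM count.

```java
// Illustrative sketch of the per-peak data described above.
// Names (Peak, sbmNormalise) are hypothetical, not taken from Calamari.
public class Peak {
    final int[] speciesCounts; // 10 integrations of ion counts for one species
    final int[] sbmCounts;     // 10 concurrent SBM integrations
    final double countTimeSec; // each integration spans (countTimeSec / 10) seconds

    Peak(int[] speciesCounts, int[] sbmCounts, double countTimeSec) {
        this.speciesCounts = speciesCounts;
        this.sbmCounts = sbmCounts;
        this.countTimeSec = countTimeSec;
    }

    // SBM normalisation: divide each species integration by its concurrent
    // SBM integration, mitigating count-rate variation that arises simply
    // from varying beam intensity.
    double[] sbmNormalise() {
        double[] normalised = new double[speciesCounts.length];
        for (int i = 0; i < speciesCounts.length; i++) {
            normalised[i] = (double) speciesCounts[i] / sbmCounts[i];
        }
        return normalised;
    }
}
```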

Step 1: Transform XML into 'total counts at peak' sheet
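Consistent with the peak description above (10 integer integrations per species per scan), the 'total counts at peak' value can be sketched as a simple sum over a peak's integrations. This is an illustrative helper, not Calamari code:

```java
// Hypothetical helper: total counts at a peak, assumed (per the peak
// description above) to be the sum of the peak's 10 integer integrations.
public final class TotalCounts {
    static long totalCountsAtPeak(int[] integrations) {
        long total = 0;
        for (int c : integrations) {
            total += c;
        }
        return total;
    }
}
```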

Step 2: Background- and SBMzero-correction, and generation of 'total cps' columns
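A minimal sketch of the Step 2 arithmetic, under assumptions that should be confirmed against the documented SQUID 2.50 algorithms: 'total cps' is taken here as total counts divided by the count time, background correction as subtraction of the cps measured at the [204.05background] position, and SBM-zero correction as subtraction of a constant SBM zero rate. All names are mine, not SQUID's:

```java
// Hypothetical sketch of Step 2 (names are mine, not SQUID's):
// convert total counts to counts-per-second, then subtract the
// background cps (for species counts) and the SBM-zero cps (for SBM counts).
public final class Corrections {
    static double totalCps(long totalCounts, double countTimeSec) {
        return totalCounts / countTimeSec;
    }

    // Background correction: subtract the cps measured at the
    // [204.05background] position from a species' cps.
    static double backgroundCorrected(double speciesCps, double backgroundCps) {
        return speciesCps - backgroundCps;
    }

    // SBM-zero correction: subtract the instrument's SBM zero rate
    // from the raw SBM cps.
    static double sbmZeroCorrected(double sbmCps, double sbmZeroCps) {
        return sbmCps - sbmZeroCps;
    }
}
```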

Step 3: Calculation of interpolated ratios of measured species
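Because the species in a scan are measured sequentially rather than simultaneously, a ratio of two species generally requires interpolating one species' cps to the other's measurement time. The sketch below uses simple linear interpolation between two bracketing measurements; the actual SQUID 2.50 interpolation scheme should be confirmed against the documented algorithms, and the names are hypothetical:

```java
// Hypothetical sketch: linearly interpolate a species' cps between two
// bracketing measurements to a target time, then form the interpolated ratio.
public final class InterpolatedRatio {
    // Linear interpolation of cps measured at times t0 and t1 to time t.
    static double interpolateCps(double t0, double cps0,
                                 double t1, double cps1, double t) {
        return cps0 + (cps1 - cps0) * (t - t0) / (t1 - t0);
    }

    // Ratio of the numerator species' cps (interpolated to the denominator's
    // measurement time) over the denominator species' cps at that time.
    static double ratioAt(double t0, double numCps0, double t1, double numCps1,
                          double tDenom, double denomCps) {
        return interpolateCps(t0, numCps0, t1, numCps1, tDenom) / denomCps;
    }
}
```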

Step 4: Calculation of 'mean' ratio for each measured species for each analysis
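Step 4 collapses the per-scan ratios into a single value per analysis. The quotation marks around 'mean' suggest SQUID 2.50's statistic may be weighted or outlier-filtered rather than a plain average; as a placeholder illustrating only the shape of the calculation, an arithmetic mean over the scan ratios:

```java
// Hypothetical sketch: arithmetic mean of the per-scan ratios for one
// analysis. SQUID 2.50's actual 'mean' may be weighted or filtered; this
// placeholder illustrates the shape of the calculation only.
public final class MeanRatio {
    static double mean(double[] scanRatios) {
        double sum = 0.0;
        for (double r : scanRatios) {
            sum += r;
        }
        return sum / scanRatios.length;
    }
}
```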
