create an NXtomo (from scratch)

The goal of this tutorial is to create an NXtomo from scratch, i.e. without converting it directly from a bliss scan.

description of the example

Let's say we want to create an NXtomo matching the following sequence:

frame index | Rotation Angle (in degree) | Frame Type | control Image Key | Image Key
----------- | -------------------------- | ---------- | ----------------- | ---------
0           | 0                          | Dark       | 2                 | 2
1           | 0                          | Flat       | 1                 | 1
2 - 201     | 0 - 89.9                   | Projection | 0                 | 0
202         | 90                         | Flat       | 1                 | 1
203 - 402   | 90 - 180                   | Projection | 0                 | 0
403         | 180                        | Flat       | 1                 | 1
404         | 90                         | Alignment  | -1                | 0
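The two image-key columns only differ for the alignment frame. The mapping can be sketched in plain Python (`image_key_from_control` is a hypothetical helper, not part of the nxtomo API; nxtomo derives the image_key dataset itself when saving):

```python
# integer image key convention from the table above:
# projection = 0, flat field = 1, dark field = 2, alignment = -1
def image_key_from_control(image_key_control):
    # hypothetical helper: "control Image Key" keeps the -1 marker for
    # alignment frames, while "Image Key" maps those back to projections (0)
    return [0 if key == -1 else key for key in image_key_control]

# the full 405-frame sequence of the table
sequence = [2, 1] + [0] * 200 + [1] + [0] * 200 + [1] + [-1]
print(image_key_from_control(sequence)[0], image_key_from_control(sequence)[-1])  # prints: 2 0
```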

create dummy dataset

Note: in order to simplify the setup we will only create one dark and one flat frame that we will reuse. But keep in mind that those are raw data, so in 'real' life you can expect several frames for darks and several frames for flats, and they will of course differ depending on when they were acquired.
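For instance, a more realistic setup would acquire several dark and several flat frames. A minimal numpy sketch (the frame shape 20 × 400 matches the frames used below; the noise levels are arbitrary illustration values):

```python
import numpy

rng = numpy.random.default_rng(seed=0)
n_darks, n_flats, frame_shape = 3, 5, (20, 400)

# several dark frames: close to zero, each with its own detector noise
darks = rng.normal(loc=0.0, scale=0.01, size=(n_darks, *frame_shape))
# several flat frames: close to one, differing between acquisitions
flats = rng.normal(loc=1.0, scale=0.01, size=(n_flats, *frame_shape))

print(darks.shape, flats.shape)  # prints: (3, 20, 400) (5, 20, 400)
```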

[1]:
# %pylab inline
[2]:
from skimage.data import shepp_logan_phantom
from skimage.transform import radon
import numpy

create some projections

[3]:
phantom = shepp_logan_phantom()
projections = {}
proj_rotation_angles = numpy.linspace(0., 180., max(phantom.shape), endpoint=False)
sinogram = radon(phantom, theta=proj_rotation_angles)

sinograms = numpy.asarray([sinogram] * 20)
[4]:
# imshow(phantom)
[5]:
radios = numpy.swapaxes(sinograms, 2, 0)
radios = numpy.swapaxes(radios, 1, 2)
[6]:
# imshow(radios[0])

create some dark

[7]:
max_shape = max(phantom.shape)
[8]:
dark = numpy.zeros((20, max_shape))
[9]:
# imshow(dark)

create some flat

[10]:
flat = numpy.ones((20, max_shape))

add some noise to radios

[11]:
tmp_radios = []
for radio in radios:
    tmp_radios.append(dark + radio * (flat - dark))
radios = numpy.asarray(tmp_radios)
[12]:
# imshow(radios[0])

create some alignment

In order to keep it simple we will pick one of the radios already created

[13]:
alignment = radios[200]
alignment_angle = proj_rotation_angles[200]

create an NXtomo that fits the sequence we want

[14]:
import nxtomo
[15]:
from nxtomo import NXtomo
[16]:
my_nxtomo = NXtomo()

provide mandatory data for Contrast Tomography

Mandatory information for contrast tomography is:

  • detector frames: the raw data

  • image-key (control): for each frame, the "type" of the frame (projection, flat, dark or alignment)

  • rotation angles: for each frame, the rotation angle in degrees

detector frames

To fit the sequence described previously we need to create the following series of frames: dark, flat, first half of the projections, flat, second half of the projections, flat and the alignment frame.

We need to provide them as a single 3D numpy array.

[17]:
# reshape dark, flat and alignment: they need to be 3D when numpy.concatenate is called
darks_stack = dark.reshape(1, dark.shape[0], dark.shape[1])
flats_stack = flat.reshape(1, flat.shape[0], flat.shape[1])
alignment_stack = alignment.reshape(1, alignment.shape[0], alignment.shape[1])

assert darks_stack.ndim == 3
assert flats_stack.ndim == 3
assert alignment_stack.ndim == 3
assert radios.ndim == 3
print("radios shape is", radios.shape)
# create the array
data = numpy.concatenate([
    darks_stack,
    flats_stack,
    radios[:200],
    flats_stack,
    radios[200:],
    flats_stack,
    alignment_stack,
])
assert data.ndim == 3
print(data.shape)
# then register the data to the detector
my_nxtomo.instrument.detector.data = data
radios shape is (400, 20, 400)
(405, 20, 400)

image key control

[18]:
from nxtomo.nxobject.nxdetector import ImageKey

image_key_control = numpy.concatenate([
    [ImageKey.DARK_FIELD] * 1,
    [ImageKey.FLAT_FIELD] * 1,
    [ImageKey.PROJECTION] * 200,
    [ImageKey.FLAT_FIELD] * 1,
    [ImageKey.PROJECTION] * 200,
    [ImageKey.FLAT_FIELD] * 1,
    [ImageKey.ALIGNMENT] * 1,
])

# ensure we have the same number of frames and image keys
assert len(image_key_control) == len(data)
# print position of flats in the sequence
print("flats are at indexes", numpy.where(image_key_control == ImageKey.FLAT_FIELD))
# then register the image keys to the detector
my_nxtomo.instrument.detector.image_key_control = image_key_control
flats are at indexes (array([  1, 202, 403]),)

rotation angle

[19]:
rotation_angle = numpy.concatenate([
    [0.0, ],
    [0.0, ],
    proj_rotation_angles[:200],
    [90.0, ],
    proj_rotation_angles[200:],
    [180.0, ],
    [90.0, ],
])
assert len(rotation_angle) == len(data)
# register rotation angle to the sample
my_nxtomo.sample.rotation_angle = rotation_angle

Field of view

the field of view can either be "Half" or "Full"

[20]:
my_nxtomo.instrument.detector.field_of_view = "Full"

pixel size

[21]:
my_nxtomo.instrument.detector.x_pixel_size = my_nxtomo.instrument.detector.y_pixel_size = 1e-7  # pixel size must be provided in SI: meter

But for attributes with a unit you can specify the unit of the value you provide using the 'unit' attribute, like:

[22]:
my_nxtomo.instrument.detector.x_pixel_size = my_nxtomo.instrument.detector.y_pixel_size = 0.1
my_nxtomo.instrument.detector.x_pixel_size.unit = my_nxtomo.instrument.detector.y_pixel_size.unit = "micrometer"

When a unit is provided it is stored as an attribute of the dataset. It must be interpreted by the software reading the NXtomo.
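For example, reader-side software could interpret such a unit attribute with a small conversion table. A hypothetical sketch (`pixel_size_in_meter` and `_TO_METER` are illustration names, not part of the nxtomo API):

```python
# hypothetical reader-side helper: convert a stored value plus its unit
# attribute to SI meters (only a few units handled in this sketch)
_TO_METER = {"m": 1.0, "meter": 1.0, "micrometer": 1e-6, "nanometer": 1e-9}

def pixel_size_in_meter(value, unit):
    return value * _TO_METER[unit]

# 0.1 micrometer and 1e-7 meter describe the same pixel size
assert abs(pixel_size_in_meter(0.1, "micrometer") - 1e-7) < 1e-15
```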

save the nx to disk

[23]:
import os
nx_tomo_file_path = os.path.join("resources_cfg_create_nxtomo_from_scratch", "nxtomo.nx")
my_nxtomo.save(file_path=nx_tomo_file_path, data_path="entry", overwrite=True)

check data saved

We can use some validators from tomoscan to ensure we have enough data to be processed by nabu

[24]:
try:
    import tomoscan
except ImportError:
    has_tomoscan = False
else:
    from tomoscan.esrf import NXtomoScan
    from tomoscan.validator import ReconstructionValidator
    has_tomoscan = True

if has_tomoscan:
    scan = NXtomoScan(nx_tomo_file_path, entry="entry")
    validator = ReconstructionValidator(scan, check_phase_retrieval=False, check_values=True)
    assert validator.is_valid()

You can check the layout of the file to ensure it looks valid as well

[25]:
from h5glance import H5Glance
H5Glance(nx_tomo_file_path)
[25]:
        • data → /entry/instrument/detector/data
        • image_key → /entry/instrument/detector/image_key
        • image_key_control → /entry/instrument/detector/image_key_control
        • rotation_angle → /entry/sample/rotation_angle
          • data [📋]: 405 × 20 × 400 entries, dtype: float64
          • field_of_view [📋]: scalar entries, dtype: UTF-8 string
          • image_key [📋]: 405 entries, dtype: int64
          • image_key_control [📋]: 405 entries, dtype: int64
          • x_pixel_size [📋]: scalar entries, dtype: float64
          • y_pixel_size [📋]: scalar entries, dtype: float64
          • name [📋]: scalar entries, dtype: UTF-8 string
          • probe [📋]: scalar entries, dtype: UTF-8 string
          • type [📋]: scalar entries, dtype: UTF-8 string
        • rotation_angle [📋]: 405 entries, dtype: float64
      • definition [📋]: scalar entries, dtype: UTF-8 string

A good practice is also to check the frames, image keys and rotation angles to ensure the values look valid.

[26]:
# ! silx view resources_cfg_create_nxtomo_from_scratch/nxtomo.nx

reconstruct using nabu

Now that we have a valid NXtomo we can reconstruct it using nabu.

We create a nabu configuration file for contrast tomography reconstruction named nabu-ct.conf in order to reconstruct one slice of the volume.

note: in the configuration you must disable take_logarithm because of the nature of this dataset.
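As a sketch, the relevant part of such a configuration file could look like the following. The section and option names are assumptions based on nabu's configuration template and may differ across versions; prefer generating the template with nabu's own configuration helper and editing it rather than copying this verbatim:

```ini
[dataset]
location = resources_cfg_create_nxtomo_from_scratch/nxtomo.nx

[preproc]
# our synthetic frames are already linear: disable the -log() step
take_logarithm = 0

[reconstruction]
# reconstruct a single slice of the volume (indices are assumptions)
start_z = 10
end_z = 10
```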

If nabu is installed you can run it:

[27]:
# ! nabu nabu-ct.conf

provide mandatory data for Phase Tomography

In order to compute phase tomography you must also register:

  • the incoming beam energy (in keV)

  • the sample / detector distance (in meter)

We can reuse the existing my_nxtomo and add the missing elements to it

[28]:
my_nxtomo.energy = 12.5  # in keV by default
my_nxtomo.instrument.detector.distance = 0.2  # in meter

And then you can reconstruct it with phase retrieval by modifying the nabu configuration file.

provide more metadata

You can also provide the x, y and z translations of the sample during the acquisition.

[29]:
my_nxtomo.sample.x_translation = [0, 12]

as well as a sample name, source information, and start and end times

[30]:
my_nxtomo.sample.name = "my sample"
[31]:
from datetime import datetime
my_nxtomo.instrument.source.name = "ESRF"  # default value
my_nxtomo.instrument.source.type = "Synchrotron X-ray Source"  # default value
my_nxtomo.start_time = datetime.now()
my_nxtomo.end_time = datetime(2022, 2, 27)
[32]:
my_nxtomo.save(
    file_path=nx_tomo_file_path,
    data_path="entry",
    overwrite=True,
)

edit an NXtomo

You can load an NXtomo from disk in order to edit it and save it once done

[33]:
nx_tomo_file_path = os.path.join("resources_cfg_create_nxtomo_from_scratch", "nxtomo.nx")
nx_tomo = NXtomo().load(nx_tomo_file_path, "entry", detector_data_as="as_numpy_array")
print("nx_tomo type is", type(nx_tomo))
print("nx_tomo energy is", nx_tomo.energy)
nx_tomo type is <class 'nxtomo.application.nxtomo.NXtomo'>
nx_tomo energy is 12.5 keV

Then you can modify the values as presented previously and overwrite the file.

[34]:
my_nxtomo.energy = 13.6
my_nxtomo.save(
    file_path=nx_tomo_file_path,
    data_path="entry",
    overwrite=True,
)
print("new energy is", NXtomo().load(nx_tomo_file_path, "entry").energy)
new energy is 13.6 keV

Note: the detector data is usually saved as an [h5py virtual dataset](https://docs.h5py.org/en/stable/vds.html). The amount of data associated with it can be heavy depending on the acquisition and on the available memory. In order to allow a 'smooth' edition, the detector data can be loaded according to several strategies:

  • "as_data_url" (default): each [VirtualSource](https://docs.h5py.org/en/stable/vds.html#h5py.VirtualSource) will be saved as a DataUrl in order to ease its handling (see later in the tutorial)

  • "as_virtual_source": retrieve the original VirtualSource objects to allow editing them

  • "as_numpy_array": load all the data in memory in order to modify it (and dump the entire data on save). To avoid for "real" datasets: it can trigger huge I/O.

[35]:
if os.path.exists(nx_tomo_file_path):
    os.remove(nx_tomo_file_path)
if os.path.exists("nxtomo_reconstruction.hdf5"):
    os.remove("nxtomo_reconstruction.hdf5")

Advanced usage: Provide DataUrl to instrument.detector.data

The issue with using NXtomo as presented above is that handling the data can require a lot of memory (if you have a large number of projections and / or a large detector).

The workaround for now is to provide DataUrl objects (which can point to external files). It is then the job of the FrameAppender to handle those.

For this example we will first save the metadata to the hdf5 file (and maybe some frames); then you can append series of frames to the dataset sequentially with their rotation angle, image key…

create some dataset to external files

Here we simply create some datasets in external files and record the DataUrl of each.

note: those datasets must be 3D otherwise the virtual dataset creation will fail

[36]:
from nxtomo.utils.frameappender import FrameAppender
from silx.io.url import DataUrl
import h5py

detector_data_urls = []
for i_file in range(5):
    os.makedirs("resources_cfg_create_nxtomo_from_scratch/external_files", exist_ok=True)
    external_file = os.path.join(f"resources_cfg_create_nxtomo_from_scratch/external_files/file_{i_file}.nx")
    with h5py.File(external_file, mode="w") as h5f:
        h5f["data"] = numpy.arange(
            start=(5 * 100 * 100 * i_file),
            stop=(5 * 100 * 100 * (i_file + 1))
        ).reshape([5, 100, 100])  # of course here this is most likely that you will load data from another file

    detector_data_urls.append(
        DataUrl(
            file_path=external_file,
            data_path="data",
            scheme="silx",
        )
    )

create a simple NXtomo, but this time provide the list of DataUrl to the instrument.detector.data attribute

[37]:
my_large_nxtomo = NXtomo()

provide all the information except the frames. Let's say we will have a dataset with only 180 projections

[38]:
my_large_nxtomo.instrument.detector.distance = 0.2
my_large_nxtomo.instrument.detector.x_pixel_size = my_large_nxtomo.instrument.detector.y_pixel_size = 1e-7
my_large_nxtomo.energy = 12.3
# ...
my_large_nxtomo.sample.rotation_angle = numpy.linspace(0, 180, 180, endpoint=False)
my_large_nxtomo.instrument.detector.image_key_control = [0] * 180  # 0 == Projection

provide the list of DataUrl to instrument.detector.data

[39]:
my_large_nxtomo.instrument.detector.data = detector_data_urls
[40]:
my_large_nxtomo.save("resources_cfg_create_nxtomo_from_scratch/my_large_nxtomo.nx", data_path="entry0000", overwrite=True)

note: this will create a virtual dataset under instrument/detector/data containing relative links from "my_large_nxtomo.nx" to the other files.

Then you can see that the 'data' dataset now contains the frames coming from the external files (if you run the previous cell several times it will continue appending data to it).

If a url is provided instead of a numpy array then it will be used to create a virtual dataset, which avoids duplicating the data. But be careful: in this case you must keep the relative location of the two files.

Appended frames must have the same dimensions, otherwise the operation will fail.
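The virtual dataset mechanism behind this can be sketched with plain h5py (a toy example in a temporary directory; file and dataset names are illustration values):

```python
import os
import tempfile

import h5py
import numpy

tmp_dir = tempfile.mkdtemp()
os.chdir(tmp_dir)  # keep the relative paths of this toy example simple

# external file holding the raw frames
with h5py.File("source.h5", "w") as h5f:
    h5f["data"] = numpy.arange(2 * 3 * 4).reshape(2, 3, 4)

# master file with a virtual dataset pointing at it through a relative path
layout = h5py.VirtualLayout(shape=(2, 3, 4), dtype="int64")
layout[:] = h5py.VirtualSource("./source.h5", "data", shape=(2, 3, 4))
with h5py.File("master.h5", "w") as h5f:
    h5f.create_virtual_dataset("data", layout)

# as long as the two files move together, the relative link keeps resolving
with h5py.File("master.h5", "r") as h5f:
    print("is virtual:", h5f["data"].is_virtual)  # prints: is virtual: True
```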

[41]:
H5Glance("resources_cfg_create_nxtomo_from_scratch/my_large_nxtomo.nx")
[41]:
        • incident_energy → /entry0000/instrument/beam/incident_energy
        • data → /entry0000/instrument/detector/data
        • image_key → /entry0000/instrument/detector/image_key
        • image_key_control → /entry0000/instrument/detector/image_key_control
        • rotation_angle → /entry0000/sample/rotation_angle
          • incident_energy [📋]: scalar entries, dtype: float64
          • data [📋]: 25 × 100 × 100 entries, dtype: int64
          • distance [📋]: scalar entries, dtype: float64
          • field_of_view [📋]: scalar entries, dtype: UTF-8 string
          • image_key [📋]: 180 entries, dtype: int64
          • image_key_control [📋]: 180 entries, dtype: int64
          • x_pixel_size [📋]: scalar entries, dtype: float64
          • y_pixel_size [📋]: scalar entries, dtype: float64
          • name [📋]: scalar entries, dtype: UTF-8 string
          • probe [📋]: scalar entries, dtype: UTF-8 string
          • type [📋]: scalar entries, dtype: UTF-8 string
        • rotation_angle [📋]: 180 entries, dtype: float64
      • definition [📋]: scalar entries, dtype: UTF-8 string

check that the paths of the VirtualSources are relative (they must start with './'):

[42]:
with h5py.File("resources_cfg_create_nxtomo_from_scratch/my_large_nxtomo.nx", mode="r") as h5f:
    dataset = h5f["entry0000/instrument/detector/data"]
    print("dataset is virtual:", dataset.is_virtual)
    for vs_info in dataset.virtual_sources():
        print("file name is", vs_info.file_name)
        assert vs_info.file_name.startswith("./")
dataset is virtual: True
file name is ./external_files/file_0.nx
file name is ./external_files/file_1.nx
file name is ./external_files/file_2.nx
file name is ./external_files/file_3.nx
file name is ./external_files/file_4.nx

clean

[43]:
import shutil
if os.path.exists("resources_cfg_create_nxtomo_from_scratch/my_large_nxtomo.nx"):
    os.remove("resources_cfg_create_nxtomo_from_scratch/my_large_nxtomo.nx")
if os.path.exists("resources_cfg_create_nxtomo_from_scratch"):
    shutil.rmtree("resources_cfg_create_nxtomo_from_scratch")

notes:

  • you can also provide a list of h5py.VirtualSource to the detector.data attribute

  • to append frames to an existing dataset you can also use the `FrameAppender <https://tomotools.gitlab-pages.esrf.fr/nxtomomill/_generated/nxtomomill.utils.frameappender.html>`__ class directly

Advanced use cases

provide NXtransformations to NXdetector (detector flip, rotation, translation…)

Detectors can have image flips (up-down / left-right) introduced by the acquisition (BLISS-tango) but also some manual flips (most likely a rotation along an axis).

To specify those, the NXdetector has a TRANSFORMATIONS group defining the transformation chain to be applied. As of today (2023) only image flips are taken into account by nabu (for stitching).

To provide such transformations you can add them one by one like:

[44]:
from nxtomo import NXtomo
my_nxtomo = NXtomo()
[45]:
from nxtomo.utils.transformation import Transformation
my_nxtomo.instrument.detector.transformations.add_transformation(
    Transformation(
        axis_name="rx",    # axis name must be unique
        transformation_type="rotation",
        value=180,         # default unit for rotation is 'degree'
        vector=(1, 0, 0),  # warning: transformations are provided as (x, y, z), which differs from the usual numpy order (z, y, x)
    )
)

There are several utility classes to directly provide detector up-down / left-right flips, basic transformation axes… Please consider using them

[46]:
from nxtomo.utils.transformation import Transformation, UDDetTransformation, LRDetTransformation, TransformationAxis, TransformationType
from nxtomo.nxobject.nxtransformations import NXtransformations

nx_transformations = NXtransformations()
nx_transformations.transformations = (
    UDDetTransformation(),                                  # vertical flip of the detector
    LRDetTransformation(depends_on="ry"),                   # horizontal flip of the detector. Applied after the vertical flip
    Transformation(                                         # some translation along the x axis, applied after the horizontal flip
        axis_name="tx",
        value=0.02,                                         # value can be a scalar (static value) or an array of values (one per frame expected)
        transformation_type=TransformationType.TRANSLATION, # default unit for translation is SI 'meter'
        depends_on="rz",                                    # applied after the horizontal flip in the transformation chain
        vector=TransformationAxis.AXIS_X,
    ),
)
my_nxtomo.instrument.detector.transformations = nx_transformations

Note: NXtransformations and NXsample

As of today the NXtomo application uses a set of datasets (x_translation, y_translation, z_translation, rotation_angle) to determine the sample transformations. It might evolve in the future to the benefit of an NXtransformations group, but this is not implemented at the moment. So please use the 'original' datasets for sample transformations.