Many clinicians are beginning to use 3D printing (3DP), a form of additive manufacturing (AM), to make anatomic models for surgical planning, patient education, and more (1, 2). Over the last decade, traditional manufacturers have used 3DP to fabricate patient-specific medical devices (3). However, the more recent trend is for health care systems to bring 3D printing capabilities within the walls of the hospital, at the point of care. With increased implementation at the point of care, health care facilities need to develop methods to ensure these devices are safe and do not increase risk to the patient. In August 2017, the FDA discussed several types of activities that could be undertaken at the point of care (4). Common use cases include patient-specific implants, surgical cutting guides, and anatomic models (5). Anatomic models may improve surgical outcomes and provide tactile feedback during surgical planning (6). Procedures to ensure patient-specific models meet clinical requirements can be developed through in-house experience, but a systematic approach built on an understanding of critical-to-quality attributes and established techniques can reduce patient risk and increase output consistency.
A basic workflow for generating a 3D printed model from patient anatomy begins with an appropriate patient imaging data set, progresses through several software steps to isolate the anatomy of interest, and then through additional steps to convert volumetric segmentation data to surface data (Fig. 1). Best practices in volumetric patient imaging (e.g., computed tomography (CT) and magnetic resonance imaging (MRI)) for designing patient-specific models may differ from the standard clinical imaging protocols used for diagnosis (1) and should be optimized for the specific equipment and software in use. Optimal imaging protocols for segmentation typically result in isotropic voxels with a small field of view in the XY plane and thin slices along the Z-axis – at the cost of potentially increased noise (7, 8). However, clinical needs must also be weighed, including radiation dose and the benefits of additional imaging sequences (9).
Once patient imaging is acquired, software is used to segment a region of interest (ROI), with most programs following a similar generalized workflow (Fig. 1). Previous investigations into image acquisition and workflow found that the largest influences on model accuracy were insufficient scan quality and manual segmentation in complex soft tissue cases (10). Recent algorithm improvements have increased the availability and degree of segmentation automation. As 3DP moves to the point of care and the clinical environment, users need to validate software processes and measure their agreement with the true anatomy (11). Each intended clinical use requires a specific level of accuracy.
For example, a model created to demonstrate relative anatomy may be functional with rough precision and accuracy, whereas a model intended for the sizing, placement, or templating of an implanted device may require submillimeter accuracy. The final accuracy of a 3D printed model is a function of manufacturing hardware quality control and of the variability introduced during the workflow that converts patient scans to a printable format such as standard tessellation language (STL) files. STL surface meshes are the most prevalent digital model files due to their history in computer graphics and their ability to minimize the storage and processing power needed for large volumes. While no standard imaging protocol for medical 3DP existed at the time of writing, most available protocols for FDA-approved devices and previous studies acknowledge that image slice thickness and slice interval are primary limiting factors in scan quality. Noise, beam hardening, patient movement, and metal artifacts can all contribute to inhomogeneities in gray values, negatively impacting segmentation precision. Similarly, for printing hardware and process output, maintenance, process controls, and material controls have all been identified as critical factors for accuracy (12).
This study focuses on identifying key details of the digital workflow in segmentation software that will help a health care facility evaluate and implement the right solutions for its needs. Programmatic implementations and user options can affect the mesh either cosmetically or substantially – effects that are not always immediately evident to the user. This study uses several common programs with different automated approaches to illustrate the kind of variability that can exist between software programs and how to test for it. A single program may also offer a variety of parameters that lead to different outputs. We seek to identify and implement metrics to quantify potential geometric variation in 3D models arising from software implementations and their associated workflows. Four programs were selected to represent a spectrum of those available, including both FDA-cleared and non-cleared software with proprietary or open-source implementations. Most programs operate in a similar fashion, but user control over smoothing options varies between programs. The comparison metrics used here can be extended and repeated with other software programs and workflows not described here.
Background: Representing Medical Imaging as Digital Solids or Segmented Anatomy
The first step in characterizing variability was to understand the underlying algorithms behind segmentation, mesh construction, and mesh smoothing and their typical implementations. Medical image volumes are – at their simplest – blocks of data stacked on top of each other into volumetric pixels, or voxels. When regions of interest are segmented out and treated as separate datasets from the original imaging data, they are stored as digital solids. There are two ways to store this data. The first is as solid volume information, where all the voxels for an ROI are collected and named. While most representative of the patient data, volume models appear visually jagged because they are rendered from voxel information alone. The second method is to create a surface mesh – essentially making a shell mold of the ROI by draping strings over it until they cover the entire region. This “surface mesh” is described in triangles. In the analogy, everywhere the strings intersect is called a vertex, the line between two vertices is called an edge, and the space between any three vertices is called a face (Fig. 1A). Just like the real object, each face has an inside and an outside; the outward direction is called its “normal”. All of these triangles together form a continuous mesh wrapped around the ROI.
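The vertex, edge, and face structure described above can be sketched in a few lines. The tetrahedron and the helper function below are purely illustrative (not taken from any of the programs studied); the sketch uses the vertex-array-plus-face-index layout common to most mesh libraries.

```python
import numpy as np

# Vertices as 3D points; faces as index triples into the vertex array.
# Hypothetical geometry: a single tetrahedron standing in for an ROI.
vertices = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])
faces = np.array([
    [0, 2, 1],  # bottom, wound so its normal points outward (-z)
    [0, 1, 3],
    [0, 3, 2],
    [1, 2, 3],
])

def face_normals(vertices, faces):
    """Unit normal of each triangle (right-hand rule on the winding order)."""
    a, b, c = (vertices[faces[:, i]] for i in range(3))
    n = np.cross(b - a, c - a)  # vector perpendicular to each face
    return n / np.linalg.norm(n, axis=1, keepdims=True)

normals = face_normals(vertices, faces)
print(normals[0])  # bottom face normal points in -z
```

Each edge of the tetrahedron is shared by exactly two faces, which is what makes the mesh a continuous, watertight shell around the region.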
Storing an ROI as a mesh is essential for 3DP because most 3D printing software uses the coordinates of the vertices and triangles – stored as STL files – to tell the printer motors where to move and place material as the model is printed. While there are some limited advancements in printing technology that allow models to be stored with voxel information only (13), these are not the current standard.
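To make the STL structure concrete, the sketch below writes a mesh as an ASCII STL file. It is a minimal illustration of the format only – each facet stores a recomputed normal plus three explicit vertex coordinates, so the shared-vertex indexing of the in-memory mesh is not preserved. Production workflows generally rely on validated libraries and the more compact binary STL variant; the function name, file path, and tetrahedron here are hypothetical.

```python
import os
import tempfile
import numpy as np

def write_ascii_stl(path, vertices, faces, name="roi"):
    """Write a triangle mesh as an ASCII STL file."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for tri in faces:
            a, b, c = vertices[tri]
            n = np.cross(b - a, c - a)
            n = n / (np.linalg.norm(n) or 1.0)  # guard degenerate facets
            f.write(f"  facet normal {n[0]:e} {n[1]:e} {n[2]:e}\n")
            f.write("    outer loop\n")
            for v in (a, b, c):
                f.write(f"      vertex {v[0]:e} {v[1]:e} {v[2]:e}\n")
            f.write("    endloop\n")
            f.write("  endfacet\n")
        f.write(f"endsolid {name}\n")

# Hypothetical example mesh: the four faces of a tetrahedron.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
tris = np.array([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])
path = os.path.join(tempfile.mkdtemp(), "tetra.stl")
write_ascii_stl(path, verts, tris)
print(open(path).read().count("facet normal"), "facets written")
```

Because every facet repeats its vertex coordinates, STL files grow quickly with mesh density, which is one reason decimation matters downstream.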
Generating a uniform mesh – where triangles are even in size and distribution – is computationally complex but typically makes the digital object more accurate and easier to work with in software (Fig. 1 Bii). When increasing the resolution of a triangulated mesh, more and smaller triangles are added to better approximate curved features. Dense meshes can be computationally complex and burdensome to work with. However, if the mesh density is not high enough, the solid may not accurately resemble the original object. In a simple case, a marble could be represented with four triangles and look like a pyramid, or with thousands of triangles and look like a smooth sphere. In addition to misrepresenting a simple solid, areas with high curvature and small features can cause triangles to become very thin or create disconnected areas or holes. A balance must be struck between the complexity of added triangle density and the need for dimensional fidelity. Many workflows mediate this by first creating a very dense original mesh and then reducing the number of triangles and optimizing their locations until an automatic criterion based on input parameters is met or the user determines that further modification is not needed. Note that reducing the number of triangles does not always reduce fidelity (Fig. 1 Bi). Many programs use constant-density meshes, where the triangles are all roughly the same size. Some programs can also produce variable-density meshes, which make the triangles smaller in regions that require higher resolution or fidelity. For most standard anatomic modeling applications, constant-density meshes can accurately reproduce clinically relevant features.
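The marble example can be made quantitative with a short sketch: starting from an octahedron whose vertices lie on the unit sphere, each subdivision pass quadruples the triangle count, and the worst-case gap between the faceted surface and the true sphere shrinks accordingly. The midpoint-subdivision routine below is a generic illustration written for this example, not the refinement algorithm of any particular program.

```python
import numpy as np

def subdivide(vertices, faces):
    """One midpoint-subdivision pass: every triangle becomes four."""
    verts = list(vertices)
    cache, new_faces = {}, []
    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in cache:
            cache[key] = len(verts)
            verts.append((verts[i] + verts[j]) / 2.0)
        return cache[key]
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
    return np.array(verts), np.array(new_faces)

# Octahedron as a crude "marble": 8 triangles with vertices on the
# unit sphere (illustrative geometry only).
v = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
              [0, -1, 0], [0, 0, 1], [0, 0, -1]], float)
f = np.array([[0, 2, 4], [2, 1, 4], [1, 3, 4], [3, 0, 4],
              [2, 0, 5], [1, 2, 5], [3, 1, 5], [0, 3, 5]])

counts, errors = [], []
for level in range(4):
    # Worst-case gap between the faceted surface and the true sphere,
    # measured at triangle centroids.
    gap = 1.0 - np.linalg.norm(v[f].mean(axis=1), axis=1).min()
    counts.append(len(f))
    errors.append(gap)
    print(f"{len(f):4d} triangles, worst centroid gap {gap:.3f}")
    v, f = subdivide(v, f)
    v /= np.linalg.norm(v, axis=1, keepdims=True)  # reproject onto sphere
```

The gap falls by roughly a factor of four per pass while the file size quadruples, which is the density-versus-fidelity trade-off described above.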
Because of the complicated organic shapes of anatomic features, many algorithms and methods have been developed to help users identify boundaries between regions in medical images by automating segmentation and then increasing accuracy through refining processes. These processes can affect the final product in ways that are clinically relevant or insignificant, depending on the ROI's clinically relevant features and on how the algorithm works. Knowledge of a few key parameters can help determine what any software is doing, how it will affect the anatomic model, and ultimately how it will affect patient care and safety.
Background: Algorithms
Automated algorithms are used at multiple steps in the digital pipeline. The two steps with the greatest chance to impact final print quality are automated segmentation and mesh smoothing. Segmentation begins with thresholding (14) to define the contour of the ROI. The selected area is highlighted and color coded to visibly “mask” the ROI. Region growing can then be performed to refine the mask over the ROI (15). Various methods, such as active contours (16, 17) and region competition (18), exist to semi-automatically refine segmentation masks, using gray-scale comparisons with weighting or probability calculations to decide which voxels to include in the refined contour. After refinement, the next step is to generate a 3D volume, generally by implementing a version of the marching cubes algorithm (19). This entails partitioning the mask into individual polygons representing each voxel, then fusing them into a surface. Cube size is generally defined by the voxel size from the scan data and is set to a multiple of the slice thickness.
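As a simplified illustration of the first two steps – thresholding, followed by a crude stand-in for region growing (keeping only the connected component containing the ROI) – the sketch below runs on a synthetic volume. The gray values, threshold, and geometry are invented for the example and do not correspond to any real scan or program.

```python
import numpy as np
from scipy import ndimage

# Synthetic "scan": a bright sphere (the ROI) and a small bright speck
# of unrelated material over a noisy dark background (invented values).
rng = np.random.default_rng(0)
z, y, x = np.mgrid[0:40, 0:40, 0:40]
volume = 40.0 + 10.0 * rng.standard_normal((40, 40, 40))
volume[(z - 20) ** 2 + (y - 20) ** 2 + (x - 20) ** 2 < 10 ** 2] = 200.0
volume[2:4, 2:4, 2:4] = 200.0  # speck that thresholding alone keeps

# Step 1: a global threshold defines the initial mask.
mask = volume > 120.0

# Step 2: crude refinement -- keep only the largest connected
# component, a stand-in for growing a region from a seed in the ROI.
labels, n = ndimage.label(mask)
sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
roi = labels == (np.argmax(sizes) + 1)

print(f"{int(mask.sum())} voxels thresholded, {int(roi.sum())} kept after refinement")
```

In a full pipeline, the refined mask would then be handed to a marching cubes implementation (for example, `skimage.measure.marching_cubes` with its `spacing` argument set from the voxel dimensions) to produce the surface mesh.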
The newly created meshes can be smoothed and refined, terms whose meanings often overlap. Refinement refers here to any of several methods that increase the fidelity of the mesh in specific areas where feature resolution is needed. Smoothing refers to decreasing noise in contours and flattening surface features. Segmented meshes made from patient images tend to be large and very complex and can usually be decimated to reduce computational load. Decimation reduces the number of triangles that make up the mesh, decreasing file size and complexity while ideally preserving topology (20). The most common smoothing algorithms, with notes about their implementations, are summarized in Table 1.
Most mesh smoothing algorithms function by iterating through the mesh and relocating vertices according to mathematical restrictions to optimize the mesh against a given parameter. The most common smoothing algorithms are implementations of Laplacian smoothing (21, 22), a vertex-based technique that iteratively converges a curve towards a point. Not all smoothing algorithms are made equal. A pure Laplacian implementation does not correct for mesh shrinkage, but modifications such as Taubin smoothing (23) include a compensating inflation step after each shrinking step. Related methods such as angle-based (24), bilateral (25), and curvature-based (26) smoothing have been implemented to preserve details (sharp points, small-radius curvatures, or thin walls) in the mesh. Most programs include user-selectable options to preserve small features and boundaries.
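The shrinkage behavior and Taubin's compensation can be demonstrated on a closed contour, the curve analog of a surface mesh. The sketch below is a simplified illustration, not the implementation used by any of the four programs; the lam = 0.33, mu = -0.34 pair is a commonly cited Taubin parameter choice, and the noisy circle is invented test data.

```python
import numpy as np

def laplacian_step(points, factor):
    """Move each vertex of a closed contour toward the average of its
    two neighbours, scaled by `factor` (a negative factor inflates)."""
    avg = (np.roll(points, 1, axis=0) + np.roll(points, -1, axis=0)) / 2.0
    return points + factor * (avg - points)

def smooth(points, iterations, lam=0.33, mu=0.0):
    """mu == 0 gives pure Laplacian smoothing (the contour shrinks);
    a small negative mu with |mu| > lam gives Taubin-style smoothing,
    whose inflation step compensates for the shrinkage."""
    for _ in range(iterations):
        points = laplacian_step(points, lam)
        if mu:
            points = laplacian_step(points, mu)
    return points

def mean_radius(points):
    return np.linalg.norm(points, axis=1).mean()

# A noisy circle of radius 10 stands in for one slice contour of a
# segmented ROI (hypothetical data, not from a real scan).
t = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
rng = np.random.default_rng(1)
r = 10.0 + 0.3 * rng.standard_normal(40)
noisy = r[:, None] * np.c_[np.cos(t), np.sin(t)]

lap = smooth(noisy, 100)                      # Laplacian: shrinks
tau = smooth(noisy, 100, lam=0.33, mu=-0.34)  # Taubin: size preserved
print(f"mean radius: original {mean_radius(noisy):.2f}, "
      f"Laplacian {mean_radius(lap):.2f}, Taubin {mean_radius(tau):.2f}")
```

Both variants remove the high-frequency noise, but only the Taubin-style pass leaves the contour near its original size, which is why unmodified Laplacian smoothing can silently thin anatomic features.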
Optimal mesh smoothing has long been an active topic, and new implementations and optimization corrections are continually introduced. Open-source and proprietary implementations can provide different options and benefits to the user, each with accompanying compromises. With open-source software, the user can directly view and sometimes modify the implementation but may sacrifice a user-friendly interface or extensive validation. Proprietary programs generally include well-developed user interfaces but do not always publish complete descriptions of their algorithms, giving users less visibility and control over algorithmic behavior. However, that limited flexibility is often accompanied by additional software validation.
It is incumbent on the user to identify the characteristics of their software and determine which are most important for their application. Here we use four commonly available segmentation programs to show differences that can arise from their implementation of algorithms. Then we identify several methods of comparing algorithms, locating errors, and maintaining quality for clinical processes.
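One widely used comparison metric for segmented surfaces is the Hausdorff distance, the worst-case distance between two point sets. The sketch below is a minimal illustration assuming two vertex clouds exported from different programs; the spheres and the 2% scale difference are synthetic stand-ins for real exported meshes.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

# Hypothetical vertex clouds from segmenting the same ROI in two
# programs: points on a 10 mm sphere and a copy scaled by 2%
# (a uniform 0.2 mm bias), standing in for real exported meshes.
rng = np.random.default_rng(2)
p = rng.standard_normal((2000, 3))
a = 10.0 * p / np.linalg.norm(p, axis=1, keepdims=True)
b = 1.02 * a

# Symmetric Hausdorff distance: worst-case point-to-point deviation.
d = max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])
print(f"worst-case deviation: {d:.2f} mm")
```

Because the Hausdorff distance is sensitive to single outlier vertices, comparisons of segmentation outputs often report it alongside a mean or percentile surface distance.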