Body fat percentage is a key indicator of health risk, performance, and treatment outcomes, yet existing assessment methods face tradeoffs. DXA (dual-energy X-ray absorptiometry) provides high accuracy but is costly and exposes participants to radiation, while BIA (bioelectrical impedance analysis) is inexpensive but often imprecise.
We developed and internally validated ShapeScale, a 3D optical body scanner that predicts whole-body fat percentage directly from body surface geometry using an AI model trained on paired 3D scans and DXA outcomes from 1,000 adults.
Within this training cohort, ShapeScale achieved a mean absolute error (MAE) of 1.86 percentage points and an R² of 0.90 against DXA, significantly outperforming four-point bioelectrical impedance analysis (BIA; MAE 4.79 percentage points, R² 0.35). These findings demonstrate that ShapeScale can deliver DXA-level accuracy while remaining fast, non-ionizing, and practical for widespread use in clinical, fitness, and consumer settings.
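For readers who want to reproduce the headline metrics on their own data, the MAE and R² reported above are the standard definitions. A minimal sketch (the values in the example are toy numbers, not study data):

```python
def mae_and_r2(predicted, reference):
    """Mean absolute error and coefficient of determination (R^2)
    of predicted body fat percentages against a reference method."""
    n = len(reference)
    # Mean absolute error in percentage points
    mae = sum(abs(p - r) for p, r in zip(predicted, reference)) / n
    # R^2 = 1 - residual sum of squares / total sum of squares
    mean_ref = sum(reference) / n
    ss_res = sum((r - p) ** 2 for p, r in zip(predicted, reference))
    ss_tot = sum((r - mean_ref) ** 2 for r in reference)
    return mae, 1.0 - ss_res / ss_tot

# Toy example (illustrative values only)
dxa = [22.0, 30.5, 18.2, 41.0]
pred = [23.1, 29.8, 19.0, 40.2]
mae, r2 = mae_and_r2(pred, dxa)
```

Note that R² here measures agreement with the reference values themselves, not a fitted regression line, so it penalizes systematic bias as well as scatter.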
Accurate body composition assessment is essential for understanding health risks, tracking performance, and evaluating interventions. Body fat percentage, in particular, is linked to cardiovascular disease, type 2 diabetes, and mortality risk [1].
Optical 3D body scanning provides a safe, fast, and repeatable alternative that scales better than traditional approaches. Prior studies often relied on anthropometric surrogates such as circumferences, girths, or regression on derived measurements, which fail to leverage the full geometric richness of 3D surface meshes [2], [3], [4].
Here we evaluate the performance of ShapeScale, a mesh-based optical approach, in 1,000 adults. Our analysis focuses on agreement with DXA, prediction accuracy across the full body fat spectrum, and the potential of optical methods as a non-ionizing alternative that may surpass current reference techniques.
DXA (Dual-energy X-ray Absorptiometry). Widely used in body composition research and clinical settings, DXA provides whole-body fat and lean mass estimates but exposes participants to radiation, requires costly equipment, and can misestimate fat in bone-dense or visceral regions [5], [6].
BIA (Bioelectrical Impedance Analysis). Inexpensive and widely available, BIA is fast but influenced by hydration status, electrode placement, and population-specific assumptions, limiting accuracy [7].
Optical Scanning. Optical approaches avoid radiation and can be deployed in a variety of settings, offering a promising balance of safety, speed, and scalability.
ShapeScale generates watertight, high-density 3D body meshes using optical capture. For analysis, each mesh is pre-processed by removing the head and feet.
An AI model trained on these meshes predicts whole-body fat percentage directly from geometry, without requiring metadata such as age, sex, or ethnicity.
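One way to implement the head-and-feet removal described above is to crop each mesh to a vertical band defined as a fraction of the scan's height. This is an illustrative sketch, not ShapeScale's actual preprocessing code, and the cut fractions are hypothetical:

```python
def trim_head_and_feet(vertices, lower_frac=0.05, upper_frac=0.85):
    """Keep only vertices inside a vertical band of the scan.

    `vertices` is a sequence of (x, y, z) tuples with z as height.
    The cut fractions are illustrative placeholders, not the
    values used in the actual pipeline.
    """
    zs = [v[2] for v in vertices]
    z_min, z_max = min(zs), max(zs)
    span = z_max - z_min
    lo = z_min + lower_frac * span  # cut plane above the feet
    hi = z_min + upper_frac * span  # cut plane below the head
    return [v for v in vertices if lo <= v[2] <= hi]

# Three vertices at heights 0.0, 0.5, and 1.0 m: only the middle survives
kept = trim_head_and_feet([(0, 0, 0.0), (0, 0, 0.5), (0, 0, 1.0)])
```

In practice a production pipeline would cut along the mesh faces and re-close the surface rather than filter raw vertices, but the principle is the same.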
ShapeScale uses structured optical capture to generate a high-density, watertight 3D mesh of the human body in seconds. This process captures the complete surface geometry: hundreds of thousands of points across the torso, arms, and legs.
This pipeline enables ShapeScale to convert raw 3D geometry into clinically validated body fat estimates in a matter of seconds.
We enrolled 1,000 adult volunteers, primarily residents of the San Francisco Bay Area, under written informed consent. Inclusion criteria: age ≥18 years and ability to stand unaided.
ShapeScale vs DXA. ShapeScale predictions showed strong agreement with DXA-derived body fat percentage:
Regression analysis (Figure 1) showed tight clustering along the line of identity with no systematic bias across the fat range.
In comparison, four-point Bioelectrical Impedance Analysis (BIA) estimates from the InBody Dial H20 Smart Body scale showed lower accuracy relative to the DXA reference:
BIA consistently underestimated fat percentage across participants (Figure 2).
Two participants were excluded: one exceeded the device's weight limit and another wore a heart-rate monitor incompatible with bioimpedance devices. Overall, error distributions showed consistent body fat underestimation across the population.
ShapeScale outperformed four-point bioelectrical impedance analysis (BIA) across the full fat range, approaching DXA accuracy while providing a practical alternative that is faster, less costly, and does not expose users to radiation.
Accuracy depends on mesh quality and adherence to scanning protocol:
Mitigation strategies include operator training, app-based posture reminders, and automated QC checks.
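The automated QC checks mentioned above could take the form of a simple gate that rejects scans before they reach the model. This is a hypothetical sketch: the check names and thresholds are illustrative, not ShapeScale's actual criteria.

```python
def passes_qc(vertex_count, is_watertight, pose_deviation_deg,
              min_vertices=100_000, max_pose_dev=10.0):
    """Illustrative scan-quality gate; thresholds are hypothetical.

    Rejects scans that are too sparse, not watertight, or captured
    with the participant too far from the reference pose.
    """
    if vertex_count < min_vertices:
        return False, "mesh too sparse"
    if not is_watertight:
        return False, "mesh not watertight"
    if pose_deviation_deg > max_pose_dev:
        return False, "pose outside tolerance"
    return True, "ok"

good = passes_qc(vertex_count=250_000, is_watertight=True,
                 pose_deviation_deg=3.0)
```

Gating at scan time lets the app prompt the user to rescan immediately, rather than surfacing an unreliable estimate.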
ShapeScale estimates body fat percentage with near-DXA accuracy, strong agreement, and high repeatability from a single, non-ionizing scan. The mesh fidelity enables our graph neural network to predict DXA-comparable body fat estimates without expensive equipment or radiation exposure. Unlike BIA, performance is consistent across hydration states, implanted devices, and pregnancy, making the method broadly applicable.
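To give intuition for how a graph neural network consumes a mesh: vertices carry features, mesh edges define neighborhoods, and each layer aggregates information across those neighborhoods. The sketch below shows a single mean-aggregation message-passing step; it is a generic illustration of the idea, not ShapeScale's architecture, which would additionally apply learned weights and nonlinearities.

```python
def message_passing_step(features, edges):
    """One mean-aggregation message-passing step over a vertex graph.

    `features` maps vertex id -> scalar feature; `edges` lists
    undirected (u, v) pairs from the mesh connectivity.  Each vertex
    is averaged with its neighborhood, including itself.
    """
    neighbours = {v: [v] for v in features}
    for u, v in edges:
        neighbours[u].append(v)
        neighbours[v].append(u)
    return {v: sum(features[n] for n in ns) / len(ns)
            for v, ns in neighbours.items()}

# A tiny 3-vertex "mesh": a path 0 - 1 - 2
smoothed = message_passing_step({0: 0.0, 1: 2.0, 2: 4.0},
                                [(0, 1), (1, 2)])
```

Stacking such layers lets local surface curvature propagate into a global representation, which a final readout layer can map to a single body fat estimate.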
By combining the fidelity of optical 3D meshes with AI modeling, ShapeScale offers a safe, scalable solution for:
ShapeScale bridges the gap between accuracy, accessibility, and safety, setting the stage for optical methods to become the new standard in body composition assessment.
Kate Wayenberg is a data scientist specializing in applied machine learning and health technology. Her work focuses on developing and validating predictive models from complex 3D datasets, particularly in the field of body composition analysis. With expertise in statistical modeling, AI, and large-scale data collection, she bridges the gap between algorithm development and clinical application. At ShapeScale, she has contributed to building robust, data-driven methods that transform optical body scans into accurate, clinically relevant metrics.