White Paper
September 18, 2025

ShapeScale: Optical 3D Body Scanning for Body Fat Percentage Estimation

Abstract

Body fat percentage is a key indicator of health risk, performance, and treatment outcomes, yet existing assessment methods face tradeoffs. DXA provides high accuracy but is costly and exposes participants to radiation, while BIA is inexpensive but often imprecise.

We developed and internally validated ShapeScale, a 3D optical body scanner that predicts whole-body fat percentage directly from body surface geometry using an AI model trained on paired 3D scans and DXA outcomes from 1,000 adults.

Within this training cohort, ShapeScale achieved a mean absolute error (MAE) of 1.86 percentage points and an R² of 0.90 against DXA, substantially outperforming four-point Bioelectrical Impedance Analysis (BIA; MAE 4.79 percentage points, R² 0.35). These findings demonstrate that ShapeScale can deliver near-DXA accuracy while remaining fast, non-ionizing, and practical for widespread use in clinical, fitness, and consumer settings.

Introduction

Accurate body composition assessment is essential for understanding health risks, tracking performance, and evaluating interventions. Body fat percentage, in particular, is linked to cardiovascular disease, type 2 diabetes, and mortality risk [1].

Optical 3D body scanning provides a safe, fast, and repeatable alternative that scales better than traditional approaches. Prior studies often relied on anthropometric surrogates, such as circumferences and girths, or on regression over derived measurements, which fail to leverage the full geometric richness of 3D surface meshes [2], [3], [4].

Here we evaluate the performance of ShapeScale, a mesh-based optical approach, in 1,000 adults. Our analysis focuses on agreement with DXA, prediction accuracy across the full body fat spectrum, and the potential of optical methods as a non-ionizing alternative that may surpass current reference techniques.

Background

DXA (Dual-energy X-ray Absorptiometry). Widely used in body composition research and clinical settings, DXA provides whole-body fat and lean mass estimates but exposes participants to radiation, requires costly equipment, and can misestimate fat in bone-dense or visceral regions [5], [6].

BIA (Bioelectrical Impedance Analysis). Inexpensive and widely available, BIA is fast but influenced by hydration status, electrode placement, and population-specific assumptions, limiting accuracy [7].

Optical Scanning. Optical approaches avoid radiation and can be deployed in a variety of settings, offering a promising balance of safety, speed, and scalability.

ShapeScale Approach

ShapeScale uses structured optical capture to generate a high-density, watertight 3D mesh of the human body in seconds. This process captures the complete surface geometry: hundreds of thousands of points across the torso, arms, and legs. The pipeline has three stages:

  1. Mesh Creation. The system builds a precise 3D model of the body surface from multiple optical viewpoints.
  2. Pre-processing. To standardize across users, meshes are normalized by removing the head and feet, ensuring focus on the regions where fat and lean tissue distribution matter most.
  3. AI Prediction. A deep learning model using a spectral graph neural network, trained on paired 3D scans and DXA outcomes, predicts whole-body fat percentage directly from geometry. Unlike other approaches, ShapeScale requires no demographic metadata such as age, sex, or ethnicity; the prediction comes purely from the body shape.

This pipeline enables ShapeScale to convert raw 3D geometry into accurate body fat estimates in a matter of seconds.
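The spectral graph convolution at the heart of step 3 can be sketched in a few lines. The code below is an illustrative toy, not ShapeScale's actual model (which is not public): it builds a normalized graph Laplacian for a tiny "mesh" and applies one Chebyshev-style polynomial filter layer, the standard building block of spectral graph neural networks. All sizes, weights, and the pooling head are hypothetical.

```python
import numpy as np

def normalized_laplacian(adj):
    """L = I - D^{-1/2} A D^{-1/2} for an undirected mesh graph."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    return np.eye(len(adj)) - (adj * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]

def spectral_conv(x, laplacian, weights):
    """One polynomial spectral layer: ReLU( sum_k L^k X W_k )."""
    out = np.zeros((x.shape[0], weights.shape[-1]))
    xk = x
    for w in weights:          # weights has shape (K, in_dim, out_dim)
        out += xk @ w          # accumulate the k-th polynomial term
        xk = laplacian @ xk    # advance to the next power of L
    return np.maximum(out, 0)  # ReLU nonlinearity

# Tiny 4-vertex "mesh": vertex features are 3D coordinates.
adj = np.array([[0, 1, 1, 0], [1, 0, 1, 1], [1, 1, 0, 1], [0, 1, 1, 0]], float)
coords = np.random.default_rng(0).normal(size=(4, 3))
L = normalized_laplacian(adj)
w = np.random.default_rng(1).normal(size=(2, 3, 8)) * 0.1
h = spectral_conv(coords, L, w)          # per-vertex feature maps, shape (4, 8)
# A real model would stack such layers, then globally pool the vertex
# features and apply a regression head to output a single fat-% value.
fat_pct = float(h.mean(axis=0).sum())
```

Because the filter is a polynomial in the Laplacian, each vertex's output depends only on its k-hop mesh neighborhood, which is what lets such models exploit local surface geometry rather than hand-picked circumferences.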

Participants and Data Collection

We enrolled 1,000 adult volunteers, primarily residents of the San Francisco Bay Area, under written informed consent. Inclusion criteria: age ≥18 years and ability to stand unaided.

  • Demographics: 662 male, 378 female, ages 18–75 years, BMI 15.0–59.3, body fat percentage 5.5–56.8%.
  • Reference method: Whole-body GE Lunar iDXA.
  • Pre-scan protocol: Light clothing, no metal (for DXA), no strenuous morning exercise, fasting ≥2 hours.
  • ShapeScale scans: Three consecutive scans per participant, in a standardized stance, wearing skin-fitting underwear or sportswear (men without a top). Long hair tied up, no wrist jewelry. Scans performed the same day, typically within one hour of DXA.
  • Four-point Bioelectrical Impedance Analysis (BIA) comparator: InBody Dial H20 Smart Body Scale. Two participants excluded (one exceeded weight limit, one wore incompatible heart monitor).
  • Quality control: Trained operators excluded implausible cases (e.g., participants presenting with a large soft abdomen but unusually low DXA fat, or unusually low body fat percentage despite no exercise history and no visible muscle definition).

Results

ShapeScale vs DXA. ShapeScale predictions showed strong agreement with DXA-derived body fat percentage:

  • Mean Absolute Error: 1.86 percentage points
  • Root Mean Square Error: 2.28 percentage points
  • R²: 0.90

Regression analysis (Figure 1) showed tight clustering along the line of identity with no systematic bias across the fat range.
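The agreement metrics above can be reproduced from paired measurements in a few lines. The sketch below uses made-up numbers, not study data, and implements the standard definitions of MAE, RMSE, and the coefficient of determination against a reference method.

```python
import numpy as np

def agreement_metrics(pred, ref):
    """MAE, RMSE, and R^2 of predictions against a reference method."""
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    err = pred - ref
    mae = np.abs(err).mean()                     # mean absolute error
    rmse = np.sqrt((err ** 2).mean())            # root mean square error
    ss_res = (err ** 2).sum()                    # residual sum of squares
    ss_tot = ((ref - ref.mean()) ** 2).sum()     # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    return mae, rmse, r2

dxa  = [18.2, 25.4, 31.0, 40.7, 12.9]   # illustrative reference body fat %
pred = [17.5, 26.1, 30.2, 41.9, 13.4]   # illustrative scanner estimates
mae, rmse, r2 = agreement_metrics(pred, dxa)
```

Note that both MAE and RMSE are in percentage points of body fat, so they are directly comparable across methods, while R² summarizes how much of the between-participant variance the predictions capture.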

In comparison, four-point Bioelectrical Impedance Analysis (BIA) estimates from the InBody Dial H20 Smart Body Scale showed lower accuracy relative to the DXA reference:

  • Mean Absolute Error: 4.79 percentage points
  • Root Mean Square Error: 5.59 percentage points
  • R²: 0.35

BIA consistently underestimated body fat percentage across participants, with error distributions showing systematic underestimation across the population (Figure 2).

ShapeScale outperformed four-point Bioelectrical Impedance Analysis (BIA) across the full fat range, approaching DXA accuracy while providing a practical alternative that is faster, less costly, and does not expose users to radiation.

Limitations

Accuracy depends on mesh quality and adherence to scanning protocol:

  • Clothing: Participants were instructed to wear skin-fitting underwear or sportswear, with no top for men. Deviations from these guidelines (e.g., loose garments, t-shirts on male subjects) can introduce local surface artifacts on the mesh and affect predictions.
  • Hair: Participants with longer hair were instructed to tie their hair on top of the head to avoid covering the neck. Untied hair or ponytail lying along the spine can introduce local surface artifacts that interfere with the body mesh.
  • Posture and stillness: The model assumes a standardized, still stance with good posture. Variations such as movement, slouching, bent knees, or asymmetric arm placement alter surface geometry in ways that may perturb predictions.
  • Skin laxity: In participants with substantial weight loss or age-related skin laxity, folds of redundant tissue can distort the underlying body contour captured by the scanner. While the graph neural network is robust to small local perturbations, extreme cases may reduce accuracy.

Mitigation strategies include operator training, app-based posture reminders, and automated QC checks.
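The automated QC checks mentioned above could take the form of simple plausibility rules applied before a result is reported. The sketch below is a hypothetical illustration: the field names and thresholds are invented for this example and are not ShapeScale's actual rules.

```python
def qc_flags(scan):
    """Return a list of reasons a scan/reference pair looks implausible."""
    flags = []
    if not scan["mesh_watertight"]:
        flags.append("mesh not watertight")
    if not 3 <= scan["predicted_fat_pct"] <= 65:
        flags.append("fat % outside plausible human range")
    gap = abs(scan["predicted_fat_pct"] - scan["dxa_fat_pct"])
    if gap > 10:  # illustrative threshold for scanner/reference disagreement
        flags.append(f"scanner/DXA disagreement of {gap:.1f} points")
    return flags

# A pair with a large scanner/DXA gap gets flagged for operator review.
example = {"mesh_watertight": True, "predicted_fat_pct": 22.0, "dxa_fat_pct": 35.5}
flags = qc_flags(example)
```

Checks like these can catch the kinds of exclusions described in the data-collection protocol (e.g., a large soft abdomen paired with unusually low DXA fat) without relying solely on operator judgment.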

Conclusion

ShapeScale estimates body fat percentage with near-DXA accuracy, strong agreement, and high repeatability from a single, non-ionizing scan. The mesh fidelity enables our graph neural network to produce DXA-comparable body fat estimates without expensive equipment or radiation exposure. Unlike BIA, performance is consistent across hydration states, implanted devices, and pregnancy, making it broadly applicable.

By combining the fidelity of optical 3D meshes with AI modeling, ShapeScale offers a safe, scalable solution for:

  • Clinical research: longitudinal, radiation-free monitoring.
  • Medical spas and aesthetic practices: objective tracking of interventions.
  • Fitness and health centers: member engagement and progress visualization.
  • Consumer wellness: at-home, repeatable monitoring.

ShapeScale bridges the gap between accuracy, accessibility, and safety, setting the stage for optical methods to become the new standard in body composition assessment.

References

  1. Jo A., Orlando F., Mainous III A.G. Editorial: Body composition assessment and future disease risk. Frontiers in Family Medicine (2025).
  2. Ng B.K., Hinton B.J., Fan B., Kanaya A.M., Shepherd J.A. Clinical anthropometrics and body composition from 3D whole-body surface scans. Eur J Clin Nutr (2016).
  3. Ng et al. Detailed 3D body shape features predict body composition, blood metabolites, and functional strength: the Shape Up studies. Obesity (2019).
  4. Adler C., Steinbrecher A., Jaeschke L., et al. Validity and reliability of total body volume and relative body fat mass from a 3D photonic body surface scanner. PLoS ONE (2017).
  5. Kim T.N. Use of DXA for body composition in chronic disease management. Clin Physiol Pharmacol (2024).
  6. Tavoian D., Ampomah K., Amano S., et al. Changes in DXA-derived lean mass and MRI-derived cross-sectional area of the thigh. J Cachexia Sarcopenia Muscle (2019).
  7. Iftime A., Scheau C., Babes R.M., Ionescu D., et al. Confounding factors in bioelectrical impedance measurements. Diagnostics (2025).

About the author

Kate Wayenberg is a data scientist specializing in applied machine learning and health technology. Her work focuses on developing and validating predictive models from complex 3D datasets, particularly in the field of body composition analysis. With expertise in statistical modeling, AI, and large-scale data collection, she bridges the gap between algorithm development and clinical application. At ShapeScale, she has contributed to building robust, data-driven methods that transform optical body scans into accurate, clinically relevant metrics.