Overview
Figure AI is a US robotics company founded in 2022 that develops AI-powered general-purpose humanoid robots. Founder and CEO Brett Adcock, a serial entrepreneur who previously founded Archer Aviation and Vettery, has led the company's rapid emergence as a leader in the humanoid market. 1
| Item | Content |
|---|---|
| Company | Figure AI, Inc. |
| Founder | Brett Adcock (CEO) |
| Founded | 2022 |
| Headquarters | Sunnyvale, California |
| VLA Model | Helix (Vision-Language-Action model) |
| Target Market | Industrial (manufacturing, logistics), Home |
Key Significance
Figure AI's humanoids are significant in the Physical AI field for several reasons:
- First Commercial Humanoid Manufacturing Deployment (Figure AI claim): General-purpose humanoid deployed on actual production line at BMW factory 2
- Vertical Integration AI Strategy: Transitioned from OpenAI partnership to in-house Helix VLA development (February 2025) 3
- High-Performance Hardware-Software Integration: Combination of Helix VLA capable of 200Hz full-body control with optimized hardware 4
- Aggressive Cost Reduction Target: Targeting 90% parts cost reduction in Figure 03, aiming for under $20,000 at mass production 5
- Large-Scale Production Infrastructure: BotQ factory targeting 12,000 units annually (announced March 15, 2025) 6
- Proven Industrial Performance: Contributed to production of over 30,000 X3s at BMW factory 2
Generational Comparison
| Item | Figure 01 (2023) | Figure 02 (2024) | Figure 03 (2025) |
|---|---|---|---|
| Purpose | Prototype | Industrial pilot | Commercial mass production |
| Height | 168cm (5’6”) | 168cm (5’6”) | 168cm (5’6”) |
| Weight | 60kg (132 lbs) | 70kg (154 lbs) | 60kg (14% lighter) |
| Hand DoF | Basic gripper | 16 DoF (both hands) | Improved design |
| Total DoF | 24 DoF | 35 DoF | 35+ DoF |
| Hand Payload | 20kg | 20kg | 25kg |
| Cameras | Basic | 6x RGB | 8x (including 2 palm) |
| Battery | Integrated (5 hours) | 2.25 kWh (5 hours) | 2.3 kWh (5 hours) |
| Charging | Wired | Wired | Wireless induction (2kW) |
| Computing | Basic | NVIDIA RTX dual | Dual GPU (S1/S2) |
| Manufacturing | Handmade | CNC machining | Die-casting/injection molding |
| Target Price | - | ~$30K (estimated) | Under $20K (target) |
Source: Wikipedia, Figure AI official announcements
Figure 01
Figure AI’s first humanoid robot, officially announced on March 2, 2023. It took its first steps in May 2023. 1
Physical Specifications
| Item | Spec |
|---|---|
| Height | 168cm (5’6”) |
| Weight | 60kg (132 lbs) |
| Payload | Up to 20kg (44 lbs) |
| Degrees of Freedom | 24 |
| Operating Frequency | 200 Hz |
| Operating Time | Up to 5 hours |
| Walking Speed | 1.2 m/s |
Key Features
- Bipedal Robot: Targeting logistics and warehouse operations
- Basic Mobility and Manipulation: Climbing stairs, lifting boxes, using tools
- OpenAI Integration: Voice conversation and reasoning capabilities using large language models
- Human-Level Dexterity: Capable of performing tasks requiring precision and coordination
- Torque-Controlled Walking: Adapts to uneven terrain and external disturbances
Figure 02
The second-generation humanoid, announced on August 6, 2024, marked Figure AI's first serious push toward industrial deployment. 1
Physical Specifications
| Item | Spec |
|---|---|
| Height | 168cm (5’6”) |
| Weight | 70kg (154 lbs) |
| Payload | Up to 20kg |
| Walking Speed | 1.2 m/s |
| Battery | 2.25 kWh (custom) |
| Operating Time | Over 5 hours |
Key Features
| Item | Spec |
|---|---|
| Hand DoF | 16 (5-finger both hands) |
| Total DoF | 35 |
| Hand Payload | Up to 25kg |
| Torque | Up to 150Nm |
| Range of Motion | Up to 195 degrees |
| Cameras | 6x RGB |
| Computing | Dual NVIDIA RTX GPU (3x the compute of Figure 01) |
| Sensors | RGB cameras, IMU (Inertial Measurement Unit), gyroscope, force sensors, non-contact sensing, microphones, speakers |
Major Improvements
- Cable integrated design in limbs (sleek matte black exterior)
- Battery integrated in torso (50% more energy than Figure 01)
- Onboard VLM (Vision Language Model)
- Real-time perception, decision-making, execution capable
- Speech-to-speech conversation via OpenAI models
Figure 03
The third-generation humanoid, announced on October 9, 2025, is a mass-production model targeting both home and commercial use. Initial partner deployments began the same day. 5
Physical Specifications
| Item | Spec |
|---|---|
| Height | 168cm (5’6”) |
| Weight | 60kg (14% lighter than Figure 02) |
| Battery | 2.3 kWh (custom, 78% cost reduction vs Figure 02) |
| Operating Time | About 5 hours (300 minutes) |
| Charging | Wireless induction 2kW (foot coils) |
| Data Transfer | 10 Gbps mmWave |
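The battery and charging figures above imply some simple operating numbers. The sketch below works them out; the 90% charging efficiency is an illustrative assumption, not a published spec.

```python
# Back-of-the-envelope numbers from the Figure 03 spec table.
BATTERY_KWH = 2.3    # battery capacity
RUNTIME_H = 5.0      # stated operating time
CHARGER_KW = 2.0     # wireless induction charging power
EFFICIENCY = 0.90    # assumed charging efficiency (not a published spec)

avg_draw_w = BATTERY_KWH / RUNTIME_H * 1000      # average power draw while operating
charge_time_h = BATTERY_KWH / (CHARGER_KW * EFFICIENCY)  # ideal 0-100% recharge

print(f"Average power draw: {avg_draw_w:.0f} W")          # 460 W
print(f"Full recharge (0-100%): {charge_time_h:.2f} h")   # 1.28 h
```

So at the stated specs the robot averages roughly 460W of draw and, under the assumed efficiency, could recharge from empty in well under its 5-hour operating window.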
Sensor System
- Cameras: 8 (6 main + 2 palm)
- Tactile Sensors: Custom fingertip sensors detecting loads as light as 3g (about the weight of a paper clip)
- Vision System: 2x frame rate, 1/4 latency, 60% wider FOV vs Figure 02
Audio System
- Speakers: 2x size, 4x output
- Microphones: Repositioned for improved clarity
- Real-time speech-to-speech conversation support
Safety Features
- Multi-density foam for pinch point protection
- Soft textile exterior (instead of hard metal)
- UN38.3 certified battery (BMS, cell, interconnect, pack level protection)
Manufacturing Innovation
- Manufacturing Method Transition: CNC machining -> die-casting, injection molding, stamping
- Parts Cost Reduction Target: 90% reduction
- Target Price: Under $20,000 (at mass production)
- Wrist Redesign: Reworked based on lessons from the BMW deployment; distribution board and dynamic cabling removed
Helix VLA
Helix is Figure AI’s in-house developed VLA (Vision-Language-Action) model announced February 2025. Figure AI introduces Helix as “the first VLA for high-speed continuous control of a humanoid’s full body.” 4
Architecture
System 1 / System 2 Dual System Structure:
| System | Role | Frequency | Parameters |
|---|---|---|---|
| System 2 (S2) | High-level planning, VLM | 7-9 Hz | 7B |
| System 1 (S1) | Low-level control, real-time | 200 Hz | 80M |
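The S1/S2 split is a dual-rate control pattern: a slow deliberative model periodically emits a latent plan, which a fast low-level policy consumes on every control tick. The sketch below illustrates only the timing structure, with hypothetical stand-ins (`plan_slow`, `act_fast`) for the actual networks; the rates come from the table above (8 Hz is taken from the 7-9 Hz range for simplicity).

```python
# Illustrative dual-rate loop in the style of Helix's S1/S2 split.
# plan_slow() and act_fast() are hypothetical stand-ins, not Figure AI code.
S2_HZ, S1_HZ = 8, 200  # planner and controller rates from the table above

def plan_slow(observation):
    """Stand-in for the 7B-parameter S2 VLM: returns a latent plan."""
    return {"latent": observation}

def act_fast(latent, tick):
    """Stand-in for the 80M-parameter S1 policy: returns a joint command."""
    return (latent["latent"], tick)

def run(seconds=1):
    latent = plan_slow("initial scene")
    commands = 0
    for tick in range(seconds * S1_HZ):    # 200 control ticks per second
        if tick % (S1_HZ // S2_HZ) == 0:   # S2 refreshes the plan ~8x per second
            latent = plan_slow(f"scene@{tick}")
        act_fast(latent, tick)             # S1 issues a command every tick
        commands += 1
    return commands

print(run())  # 200 commands in one simulated second
```

The key design point this illustrates: S1 never blocks on S2. The fast loop always acts on the most recent latent plan, so full-body control stays at 200 Hz even though high-level reasoning runs more than an order of magnitude slower.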
Key Features
- 35 DoF Control: Full upper body control including fingers, wrists, torso, head
- Figure AI’s Claimed Firsts:
  - First VLA for high-speed continuous humanoid full-body control
  - First VLA for dual-robot simultaneous control
  - First VLA running fully onboard on an embedded low-power GPU
- Learning Efficiency: Trained on approximately 500 hours of teleoperation demos
- Versatility: No task-specific adaptation needed, single weights for various tasks
Representative Performance: Table-to-Dishwasher
| Item | Value |
|---|---|
| Distance Traveled | 130+ feet |
| Unique Interactions | 33 |
| Objects Handled | 21 (including delicate dishware) |
Figure AI describes this as “the most complex task ever performed autonomously by a robot.” 4
Industrial Deployment
BMW Factory Pilot (2024-2025)
According to Figure AI, this is the first commercial deployment of a general-purpose humanoid in an automotive production facility. 2 7
| Item | Content |
|---|---|
| Location | BMW Spartanburg Plant (South Carolina) |
| Duration | 11 months |
| Robots Deployed | 2 Figure 02 units |
| Work Hours | 5 days/week, 10-hour shifts |
| Total Operating Hours | 1,250 hours |
Results
- Parts Processed: Over 90,000 sheet metal parts loaded
- Production Contribution: Contributed to over 30,000 BMW X3 production
- Task Description: Pick sheet metal parts from racks/bins and place on welding equipment
- Total Distance Traveled: Approximately 200 miles (322km)
- Accuracy: Over 99%
Performance Requirements
| Item | Standard |
|---|---|
| Placement Accuracy | Within 5mm tolerance |
| Single Motion Time | 2 seconds |
| Part Loading Time | 37 seconds |
| Full Cycle Time | 84 seconds |
| Success Rate Target | 99% per shift |
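The pilot results and the cycle-time requirements above can be cross-checked with simple arithmetic. Note that the two figures measure different things (the observed average pace over the whole 1,250-hour pilot vs the per-cycle target), so they should not be expected to match exactly.

```python
# Rough throughput arithmetic from the BMW pilot figures above.
PARTS = 90_000    # sheet metal parts loaded over the pilot
HOURS = 1_250     # total operating hours
CYCLE_S = 84      # required full cycle time, in seconds
SHIFT_H = 10      # shift length, in hours

avg_s_per_part = HOURS * 3600 / PARTS       # observed average pace
cycles_per_shift = SHIFT_H * 3600 / CYCLE_S # theoretical max cycles per shift

print(f"Observed average: {avg_s_per_part:.0f} s per part")      # 50 s
print(f"Cycle-limited max: ~{cycles_per_shift:.0f} cycles/shift")
```

The observed 50 s per part is averaged over all operating hours, including idle and changeover time, which is why it differs from the 84 s full-cycle requirement.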
Lessons Learned
Key lessons from BMW deployment reflected in Figure 03 design:
- Forearm Issues: Most frequent hardware failure point -> Wrist electronics completely redesigned in Figure 03
- Thermal Management: Issues due to tight packaging and agility requirements -> Distribution board and dynamic cabling removed
Funding and Valuation
Investment History
| Period | Round | Amount | Valuation | Key Investors |
|---|---|---|---|---|
| 2023.05 | Series A | ~$70M | - | Led by Parkway Venture Capital |
| 2024.02 | Series B | $675M | $2.6B | Jeff Bezos, Microsoft, NVIDIA, Intel Capital, Amazon Industrial Innovation Fund, OpenAI Startup Fund, ARK Invest, Align Ventures |
| 2025.09 | Series C | $1B+ | $39B | Led by Parkway Venture Capital, Brookfield, NVIDIA, Macquarie Capital, Intel Capital, Align Ventures, LG Technology Ventures, Salesforce, T-Mobile Ventures, Qualcomm Ventures |
Source: Wikipedia, PRNewswire
Cumulative Funding: Approximately $2B (Series A + B + C)
OpenAI Partnership
- February 2024: Collaboration agreement with OpenAI, joint development of next-generation AI models for humanoids 8
- February 2025: Partnership ended, transitioned to in-house AI development 3
- Reason: “To solve Embodied AI at scale in the real world, you need to vertically integrate robot AI” - Brett Adcock
Future Plans
- BotQ Factory: Announced March 15, 2025, targeting 12,000 units annually as the largest humanoid factory in the US 6
- 100,000 units within 4 years plan (roadmap)
- Home Market Entry: Consumer market entry projected for 2026, Figure 03’s household helper role (laundry, cleaning, dishwasher, etc.) 5
- Figure 02 Retirement: Fleet-wide retirement of Figure 02 underway following Figure 03 release
Glossary
| Term | Description |
|---|---|
| VLA | Vision-Language-Action model. AI model integrating visual input, language understanding, and action output |
| VLM | Vision-Language Model. AI model that understands images and text together |
| DoF | Degrees of Freedom. Number of independent directions a robot can move |
| IMU | Inertial Measurement Unit. Sensor measuring acceleration and angular velocity |
| FOV | Field of View. Range a camera can see |
References