We're hiring

Join us in building the next small thing.

Full-time · Engineering · Los Angeles · In-person

The role

You will build the compression technology at the core of The Compression Company. You'll design and train neural compression models, improve compression ratio while preserving quality and downstream utility, and make our training system fast and repeatable so we can produce SOTA codecs for new datasets with minimal manual effort. Your work will directly drive product performance and the pace at which we can ship into new customer environments.

You might be a fit if

  • You've trained and shipped ML models end-to-end
  • You care about reliability, performance, and clean experimentation
  • You're strong in PyTorch and comfortable owning training + evaluation
  • You enjoy turning research work into usable systems

Sample projects

  • Improving rate/quality performance for real customer datasets
  • Building automated training runs that explore model + training settings and select strong candidates
  • Extending our evaluation suite to reflect real user needs, not just offline metrics
  • Improving reproducibility and speed across the training + benchmarking loop

Compensation

Salary: $180k–$240k
Equity: Founding equity with meaningful ownership
Full relocation & visa support
Full-time · Engineering · Los Angeles · In-person

The role

You will own how our codecs run in real systems. You'll take trained models and turn them into fast, reliable encoder/decoder artifacts across edge devices and cloud GPUs. You'll build and maintain our compilation and release pipeline, improve runtime performance, and make deployment predictable and repeatable. You'll also own the cloud-side infrastructure that supports this workflow (build, artifact storage, versioning, and reproducible environments). This role is central to shipping production integrations and scaling deployments across customers.

You might be a fit if

  • You've shipped ML inference systems into production
  • You've worked with GPU runtimes (TensorRT, CUDA, ONNX Runtime, or similar)
  • You can profile and optimise latency, memory, and throughput
  • You're comfortable owning AWS-based infrastructure for builds, artifacts, and delivery workflows

Sample projects

  • Hardening ONNX → TensorRT export and compilation for repeatable builds
  • Profiling and optimising encoder/decoder performance on constrained hardware
  • Building a clean "model → artifact → release" pipeline with strong versioning and provenance
  • Maintaining the AWS infrastructure that supports builds, artifact registries, and customer delivery

Compensation

Salary: $180k–$240k
Equity: Founding equity with meaningful ownership
Full relocation & visa support
Full-time · Engineering · Los Angeles · In-person

The role

You will lead deployments of our compression system into real Earth observation (EO) workflows. This role spans both sides of the product: making encoding work reliably on constrained hardware (including onboard satellite-class systems), and validating that compressed data remains useful for downstream processing and ML pipelines. You'll work directly with customers and partners, translate operational realities into platform requirements, and help turn early deployments into a repeatable motion. We intend to hire two people into this role, with complementary strengths across edge deployment and EO analytics.

You might be a fit if

  • You've shipped EO or imaging ML systems into production, including onboard or edge environments
  • You can own integration, debugging, and validation end-to-end across hardware + software
  • You've worked with constrained deployments (compute, power, bandwidth, reliability) and performance tradeoffs
  • You're comfortable working with customers and shaping technical requirements

Sample projects

  • Deploying and hardening the encoder on Jetson / satellite-class hardware
  • Integrating our codec into customer pipelines and validating analytics and model utility
  • Defining EO evaluation suites that match operational acceptance criteria
  • Building deployment playbooks and monitoring hooks that make rollouts repeatable

Compensation

Salary: $180k–$240k
Equity: Founding equity with meaningful ownership
Full relocation & visa support

About us

We build codecs that radically reduce the cost of moving and storing high-volume data, without destroying the information people rely on. Our codecs are tailored to specific datasets and shipped as production-ready artifacts that run on real hardware, from edge devices to cloud GPUs.

Small team, high ownership, strong bias toward shipping.

Our hiring process

If it looks like there may be a fit, we'll start with a short conversation with our CEO. Next you'll have a technical discussion with our CTO focused on your prior work and how you approach hard engineering problems. Finally, we'll invite you onsite to work on a small project and spend time with the team.

General inquiries

Get in touch