Senior Data Engineer
Company: Epsilon Strategy & Insights
Location: Schiller Park
Posted on: March 21, 2026
Job Description:

The Epsilon Activation Delivery Platform team builds the core framework and connectors powering our audience activation business. We are a small, high-impact team of engineers operating at massive scale: trillions of user events flow through our pipelines every month. By building and maintaining integrations with major advertising players (Meta, Google, Amazon) and Connected TV publishers (LG, Samsung), your work directly connects our data ecosystem to the biggest platforms in the world.

Who does this role report to, and who will they collaborate with day to day?

This role reports to the Engineering Director and collaborates closely with fellow data engineers, product managers, and partner teams. You will participate in design reviews, code reviews, and the end-to-end delivery of critical business features in a highly collaborative environment.

Why would a top candidate evaluating multiple opportunities want this job?

• Operate at True Scale: Work in a high-volume data environment where your pipelines will process and route trillions of records monthly.
• Work with Modern Data Tech: Get hands-on with an innovative big data stack built on Spark, Scala, Python, AWS EMR, and Databricks.
• See Your Impact: Your code will directly enable our business to interface with the world’s largest tech, social media, and CTV platforms.
• Own the Solution: We are a small team, which means you will have the autonomy to take complex problems from ingestion and specification all the way through to production.

What You’ll Achieve

• Core Contribution: Write robust, scalable, and maintainable code using Spark (Scala/Python) and SQL to build out our core framework and data pipelines.
• Pioneer AI-Assisted Engineering: We are on the cutting edge of AI adoption. You will leverage tools like Cursor and Amazon Q Developer to accelerate coding, debugging, and testing. You’ll also actively participate in experimenting with the latest developer workflows for agentic, AI-assisted development, helping define the future of how our team builds software.
• Scale Integrations: Build, maintain, and optimize the high-concurrency data connectors that feed external publishers and ad networks.
• Optimize & Troubleshoot: Dive deep into complex data processing routines on EMR and Databricks. You will actively troubleshoot production issues, tune SQL queries, and resolve performance bottlenecks in a distributed environment.
• Team Mentorship & Quality: Participate in rigorous code reviews, help enforce engineering best practices, and mentor mid-level and junior engineers to elevate the team’s overall codebase.
• Continuous Improvement: Build and maintain automated production processing routines that fit seamlessly into our existing scheduled cloud infrastructure.

Who You Are

• What you’ll bring with you:
  o B.S. in Computer Science, Computer Engineering, or a related field.
  o 5 years of professional experience on a development team building and maintaining big data pipelines.
  o Deep, hands-on expertise in Apache Spark and distributed computing concepts.
  o Strong programming proficiency in Scala and/or Python.
  o Fluent SQL skills, with the ability to take on complex use cases, refactor code, and tune queries for massive datasets.
  o Proven experience working within cloud environments (AWS preferred) and managed platforms such as Databricks or EMR.
  o Ability to troubleshoot production issues autonomously and own a problem through to resolution.
  o Excellent communication skills to interface with internal stakeholders, ask the right questions, and translate business requirements into technical solutions.
• Why you might stand out from other talent:
  o Experience in the AdTech/MarTech space, specifically programmatic advertising, identity resolution, or audience activation.
  o Familiarity with orchestration tools (e.g., Airflow).
  o Experience building and optimizing data integrations with external REST APIs at high concurrency.
  o Exposure to streaming technologies (Kafka) or NoSQL databases.