Peter Luro
I build systems that actually hold up in production: financial data pipelines, reporting workflows, and trading tools that need to run reliably and on time.
At Talcott Resolution, I work on data pipelines and reporting systems used by actuarial and hedging teams. A lot of the work is making sure large, messy datasets (XML, JSON, SQL) get processed correctly and consistently, and that downstream reports are accurate and delivered when they’re expected. I’ve spent a lot of time dealing with edge cases, fixing data issues, and improving reliability across these workflows.
Most of what I build is in Python, with Oracle/SQL Server on the backend and Azure handling compute and orchestration. I’ve also worked on APIs that trigger processing jobs and generate reports on demand. Outside of work, I’ve been building out trading and market analysis tools. This includes pulling market data, testing strategies, and automating parts of the decision process. It’s been a good way to apply what I do professionally to something more open-ended.
Current Focus
- Building Azure Container App services that give actuaries on-demand XLSX report generation without touching batch infrastructure
- Configuring Autosys job scheduling for production workflows with structured error handling and dependency chains
- Designing and implementing file loading strategies into a SQL Server data lake, handling complex source formats and schema variation at ingestion
- Provisioning Azure resources and pipelines to support nightly hedging batch processes, ensuring reliable, environment-aware execution across DEV/QA/PTE/PROD
Work Highlights
Full details on the Projects page.
Financial Data Validation Platform
Python pipelines that detect schema drift, stale data, and identifier mismatches across Oracle and SQL Server before they reach downstream risk workflows.
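A minimal sketch of the schema-drift check described above: compare the columns a downstream workflow expects against what a table currently exposes. The table and column names are illustrative, not taken from the actual platform.

```python
def detect_schema_drift(expected: dict[str, str], actual: dict[str, str]) -> dict:
    """Return missing columns, unexpected columns, and type mismatches.

    Both arguments map column name -> type name; the shapes here are
    assumptions for illustration.
    """
    missing = sorted(set(expected) - set(actual))
    unexpected = sorted(set(actual) - set(expected))
    type_mismatches = {
        col: (expected[col], actual[col])
        for col in set(expected) & set(actual)
        if expected[col] != actual[col]
    }
    return {"missing": missing, "unexpected": unexpected,
            "type_mismatches": type_mismatches}

drift = detect_schema_drift(
    expected={"trade_id": "varchar", "notional": "decimal", "as_of": "date"},
    actual={"trade_id": "varchar", "notional": "float", "book": "varchar"},
)
# drift flags 'as_of' as missing, 'book' as unexpected,
# and 'notional' as a decimal -> float type mismatch
```

Running a check like this before loading data is what lets drift surface as a validation failure rather than a silently wrong risk report.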
Azure Ingestion Pipelines
Containerized jobs with retry logic, structured logging, and environment-aware execution for processing large financial datasets at scale.
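The retry-with-backoff plus structured (JSON) logging pattern named above can be sketched like this; the function names, attempt counts, and log fields are assumptions, not the project's actual implementation.

```python
import json
import logging
import random
import time

log = logging.getLogger("ingestion")

def run_with_retries(job, max_attempts: int = 3, base_delay: float = 1.0):
    """Run a zero-argument job, retrying on failure with jittered backoff.

    Emits one structured (JSON) log line per attempt so a log pipeline can
    parse outcomes without regexes.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            result = job()
            log.info(json.dumps({"event": "job_ok", "attempt": attempt}))
            return result
        except Exception as exc:
            log.warning(json.dumps({"event": "job_retry", "attempt": attempt,
                                    "error": str(exc)}))
            if attempt == max_attempts:
                raise
            # Exponential backoff with jitter to avoid retry stampedes.
            time.sleep(base_delay * 2 ** (attempt - 1) * (1 + random.random()))
```

Re-raising after the final attempt matters in containerized jobs: a non-zero exit is what lets the orchestrator see the failure and apply its own restart policy.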
Trading Execution System
Personal project: ingests intraday market data, evaluates EMA and breakout conditions, and submits orders through Alpaca's API with PDT compliance built in.
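The EMA, breakout, and PDT checks mentioned above can be sketched as below. The periods, the breakout definition, and the day-trade counter are illustrative assumptions, and no real broker API is called here.

```python
def ema(prices: list[float], period: int) -> float:
    """Exponential moving average of a price series (most recent last)."""
    k = 2 / (period + 1)
    value = prices[0]
    for price in prices[1:]:
        value = price * k + value * (1 - k)
    return value

def should_enter(prices: list[float], fast: int = 9, slow: int = 21,
                 day_trades_used: int = 0, pdt_limit: int = 3) -> bool:
    """Enter only on a fast-over-slow EMA signal AND a breakout of the
    recent high, while staying under the pattern-day-trader limit."""
    if day_trades_used >= pdt_limit:
        return False  # PDT rule: no more round trips in the rolling window
    breakout = prices[-1] > max(prices[:-1])  # new high vs. lookback window
    return ema(prices, fast) > ema(prices, slow) and breakout
```

Gating on the day-trade count before evaluating the signal keeps compliance independent of strategy logic: a valid setup is simply skipped once the limit is reached.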
Location
Open to opportunities in:
- Boston
- Florida
- Utah
- Texas
- Remote