Data Engineer with a winning track record in Big Data and Distributed Systems
We are looking for team members who love new challenges, cracking tough problems, and working cross-functionally. If you are looking to join a fast-paced, innovative, and incredibly fun team, we encourage you to apply.
Role: Senior Data Engineer
Location: Woodland Hills, CA
Duration: 18+ Months
We’re using data in groundbreaking ways to uncover customer insights, personalize customer experiences, and provide a unified customer view across all products.
Responsibilities:
Work in the Risk Data Analytics data engineering team. The team has 13 engineers working on Risk Management, Fraud Prevention, Hadoop data pipelines, Data Warehousing (DW), and Business Intelligence (BI) solutions.
Work closely with the Risk Decision Science team to design, build, deploy, and operate their data science, data analytics, data warehouse, and BI solutions.
Work in a fast-moving development team using agile methodologies.
Partner closely with Data Scientists, BI developers, and Product Managers to design and implement data models, database schemas, data structures, and processing logic to support various data science, analytics, machine learning, and BI workflows.
Design and develop ETL (extract-transform-load) processes to validate and transform data, calculate metrics and attributes, populate data models, etc., using Hadoop, Spark, SQL, and other technologies.
Lead by example, demonstrating best practices for code development and optimization, unit testing, CI/CD, performance testing, capacity planning, documentation, monitoring, alerting, and incident response to ensure data availability, data quality, usability, and required performance.
Define SLAs for data availability and correctness. Automate data availability and quality monitoring and alerts. Respond to alerts when SLAs are not being met.
Communicate progress across organizations and levels from individual contributor to executive. Identify and clarify the critical few issues that need action and drive appropriate decisions and actions. Communicate results clearly and in actionable form.
Demonstrate commitment to your professional development by attending conferences, taking classes, and participating in developer communities inside and outside the company.
Preferred Qualifications:
MS in Computer Science, Mathematics, or a similar field.
Familiarity and experience with AWS services.
Advanced programming skills in both Python and Java. Familiarity with R.
Strong dimensional modeling skills on Hadoop or MPP platforms (e.g., Vertica, Redshift).
6+ years of experience integrating technical processes and business outcomes, specifically: data architecture and models; data and process analysis; data quality metrics and monitoring; and developing policies, standards, and supporting processes.
6+ years of hands-on data engineering experience.
2+ years of DevOps experience including configuration, optimization, backup, high reliability, monitoring, and systems version control.
2+ years of experience building and operating scalable and reliable data pipelines based on Big Data processing technologies like Hadoop, MapReduce (MR), Spark, or ETL on MPPs.
2+ years of hands-on experience with analytics, building and operating data warehouses on Hadoop or on Vertica or Redshift MPPs.
Demonstrated ability to work in a matrix environment, influence at all levels, and build strong relationships.
Knowledge of establishing service-level agreements and the appropriate escalation and communication plans to maintain them.
Basic Qualifications:
BS in Computer Science, Mathematics, or a similar field.
Object-oriented programming skills in Python and Java, and a willingness to learn other languages (e.g., R) as needed.
Familiarity with database fundamentals including SQL, schema design, and performance tuning.
Functional understanding of Hadoop-based technologies and systems, including HiveQL, MapReduce, and Spark SQL.
4+ years of experience integrating technical processes and business outcomes, specifically: data architecture and models; data and process analysis; data quality metrics and monitoring; and developing policies, standards, and supporting processes.
4+ years of hands-on data engineering experience.
2+ years of DevOps experience including configuration, monitoring, and version control.
Track record of working with data from multiple sources, with a willingness to dig in, understand the data, and apply creative thinking and problem solving.
Excellent interpersonal and communication skills, including business writing and presentations. Ability to communicate objectives, plans, status, and results clearly, focusing on the critical few key points.
Contact: 510.795.4800 x159