PRINCIPAL DUTIES AND RESPONSIBILITIES:
- Build data pipelines: Architecting, creating, maintaining and optimizing data pipelines is the primary responsibility of the data engineer.
- Drive automation through effective metadata management: automate the most common, repeatable, and tedious data preparation and integration tasks to minimize manual processes and errors and to improve productivity. The data engineer also helps renovate the data management infrastructure to drive automation in data integration and management.
- Collaborate across departments: work collaboratively with varied stakeholders (notably data analysts and data scientists) to refine their data consumption requirements.
- Educate and train: be knowledgeable about how to address data topics, including using data and domain understanding to address new data requirements, proposing innovative approaches to data ingestion, preparation, integration, and operationalization, and training stakeholders in data pipelining and preparation.
- Participate in ensuring compliant data use: ensure that data users and consumers use the data provisioned to them responsibly. Work with data governance teams and participate in vetting and promoting content to the curated data catalog for governed reuse.
- Become a data and analytics evangelist: the data engineer is a blend of "analytics evangelist," "data guru," and "fixer." This role promotes the available data and analytics capabilities and expertise to business leaders, helping them leverage these capabilities to achieve business goals.
Education and Experience:
- Minimum of 3-5 years of relevant work experience in data management, including Big Data processing, ETL (Extract, Transform, Load) frameworks, data integration, optimization, and data quality, of which 3+ years supporting data and analytics initiatives for cross-functional teams
- Responsible for designing, building, and testing several complex ETL workflows, preferably using Alteryx
- Experienced with SQL/NoSQL databases, structured and unstructured data processing
- Experienced with SQL optimization and performance tuning
- Experienced with Logical and Physical data modeling and exposure to data modeling tools
- Foundational knowledge of various data management architectures like data warehouse, data lake and data hub, and supporting processes like data integration, data governance, data lineage and metadata management
- Experienced with:
  o Big Data processing tools and frameworks
  o Data preparation/ETL tools (Alteryx, Informatica, DataStage)
  o Data visualization tools (Power BI (Business Intelligence), Tableau)
  o SQL-on-Hadoop tools and technologies (Hive, Impala, Presto)
  o Agile development
  o Public cloud data environments (AWS (Amazon Web Services), Azure) or hybrid environments
- Preferred experience with:
  o Any programming language (Java, Python, Node.js)
  o DevOps capabilities such as version control, automated builds, testing, and release management with Git and Jenkins
  o CI/CD deployment practices
  o Certification in Alteryx, Tableau, or similar tools
- Bachelor's degree in STEM or a related technical field, or equivalent work experience
Employee Status: Full Time Employee
Shift: Day Job
Travel: No
Job Posting: Sep 15 2023
Cognizant (Nasdaq-100: CTSH) is one of the world's leading professional services companies, transforming clients' business, operating and technology models for the digital era. Our unique industry-based, consultative approach helps clients envision, build and run more innovative and efficient businesses. Headquartered in the U.S., Cognizant is ranked 185 on the Fortune 500 and is consistently listed among the most admired companies in the world. Learn how Cognizant helps clients lead with digital at www.cognizant.com or follow us @Cognizant.