Data Scientist III
Data Scientist Job 6 miles from Union
Glocomms has partnered with a global tech organization dedicated to transforming lives through spoken-word entertainment. They work with top creators to produce and share audio stories with millions of listeners worldwide. In this role, you will lead the core playback logic from start to finish, ensuring the best playback experience across Android, iOS, and Web platforms. This includes managing content delivery infrastructure, security, digital rights management, and real-time processing of listening data. You'll also spearhead initiatives to advance the state of the art in internet audio.
Team Overview
Our data science team partners with marketing, content, product, and technology teams to solve challenges using advanced ML, DL, and NLP techniques. We operate in an agile environment, managing the lifecycle of research and model development.
Responsibilities
Optimize customer interactions with validated models
Create data engineering pipelines
Innovate with cutting-edge applications
Develop data visualizations
Collaborate with data scientists, ML experts, and engineers
Share ideas and learn from the team
Basic Qualifications
Experience in modeling and research design
MS in a quantitative field or PhD with relevant experience
Proficiency in SQL and Python
Experience with AWS and Big Data Engineering
Agile Software Development experience
Preferred Qualifications
Expertise in paid and organic marketing
Experience with container platforms and streaming data
Proficiency in R, RShiny, and Scala
Data Scientist
Data Scientist Job 17 miles from Union
We are building a world-class systematic data platform that will power the next generation of our systematic portfolio engines.
The systematic data group is looking for a Data Scientist to join our growing team. The team consists of content specialists, data scientists, analysts and engineers who are responsible for discovering, maintaining and analyzing sources of alpha for our portfolio managers.
This is an opportunity for individuals who are passionate about quantitative investing. The role builds on an individual's knowledge and skills in four key areas of quantitative investing: data, statistics, technology, and financial markets.
Principal Responsibilities
Research potential alpha sources and present findings to portfolio managers and quantitative analysts
Utilize and maintain world-class data processing and transformation engines
Build technology tools to acquire and tag datasets
Engage with vendors and brokers to understand the characteristics of datasets
Interact with portfolio managers and quantitative analysts to understand their use cases and recommend datasets to help maximize their profitability
Analyze datasets to generate key descriptive statistics
Qualifications/Skills Required
Ph.D. or Master's in computer science, mathematics, statistics, or another field requiring quantitative analysis
3+ years of financial industry experience preferred
Programming expertise in Python, C++, Java or C#
Programming skills in SQL, PL-SQL or T-SQL
Strong problem-solving skills
Strong communication skills
Senior Data Scientist - Generative AI, Agents, and LLM Developer (New York-based only; no relocation or remote, please do not apply otherwise)
Data Scientist Job 17 miles from Union
Scalata is an AI-driven finance automation platform located in New York, NY. Our vision is to democratize investment data using generative AI to empower credit decision making. We aim to drive global economic growth, foster financial inclusion, and create a positive ripple effect that empowers businesses, communities, and economies worldwide.
Role Description
This is a full-time on-site role for a Data Scientist specializing in Machine Learning and Artificial Intelligence with a focus on Natural Language Processing/Understanding. The Data Scientist will be responsible for tasks such as data analysis, statistical modeling, data visualization, and implementing AI models to automate the credit lifecycle.
Qualifications
Data Science
Data Analytics and Data Analysis skills
Experience in Data Visualization
Strong programming skills in Python and PySpark
Extensive knowledge of Machine Learning and Artificial Intelligence algorithms
Highly skilled in developing agents using LangChain or other frameworks, and in chatbot management software development (a framework-agnostic sketch of the agent loop follows this list)
Practical knowledge and experience in building large language models
Experience with Natural Language Processing
Strong problem-solving and critical thinking abilities
Bachelor's or Master's degree in Computer Science, Statistics, Data Science, or related field
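The posting names agent development without prescribing a framework, so the following is a minimal, framework-agnostic sketch of the tool-calling loop that libraries like LangChain wrap. Everything here is an illustrative assumption: `call_llm` stands in for any chat-completion client, and `lookup_credit_metric` is a hypothetical tool.

```python
# Minimal, framework-agnostic sketch of a tool-calling agent loop.
# `call_llm` is a hypothetical placeholder for any chat-completion
# client; LangChain or a similar framework wraps this same control flow.
import json
from typing import Callable

def lookup_credit_metric(company: str, metric: str) -> str:
    """Hypothetical tool: fetch a credit metric for a company."""
    return json.dumps({"company": company, "metric": metric, "value": 1.8})

TOOLS: dict[str, Callable[..., str]] = {"lookup_credit_metric": lookup_credit_metric}

def call_llm(messages: list[dict]) -> dict:
    """Placeholder for a real chat-completion call. Expected to return
    either {"answer": "..."} or {"tool": name, "args": {...}}."""
    raise NotImplementedError("wire up an LLM client here")

def run_agent(question: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if "answer" in reply:                  # the model is done reasoning
            return reply["answer"]
        observation = TOOLS[reply["tool"]](**reply["args"])  # run requested tool
        messages.append({"role": "tool", "content": observation})
    return "Step budget exhausted without a final answer."
```

In a framework-based build, the loop, tool registry, and message bookkeeping come from the library; the control flow stays essentially this.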
Lead Data Scientist
Data Scientist Job 17 miles from Union
About the Job:
As a Lead Data Scientist, you will transform complex data into actionable insights using advanced quantitative methods. You will analyze and interpret large datasets to inform business decisions, and craft compelling presentations to drive business strategy and optimize customer engagement.
A Day in the Life:
Provide actionable insights to stakeholders through data analysis, research, and modeling.
Integrate and manage internal and external data sources, including structured and unstructured data.
Develop and apply advanced quantitative methods, including statistical models and machine learning techniques.
Analyze customer behavior and market trends to inform business strategies and optimize ROI.
Prepare and deliver clear, insightful presentations to various audiences.
Conduct multi-channel analysis, leveraging online and offline data to drive business decisions.
Mentor and manage junior data scientists, guiding workflow and development.
What You Will Need:
Bachelor's degree in Mathematics, Statistics, Economics, Computer Science, Engineering, or other natural sciences (Master's degree preferred).
5+ years of experience in open-source programming, applied statistics, and machine learning.
Proven expertise in relational database management, data wrangling, and statistical analysis.
Excellent communication and presentation skills, with experience presenting to upper management.
Strong project and team management skills, with the ability to manage the workflow of 1-3 data scientists or analysts.
Experience integrating data scientific modeling into data visualization platforms (e.g., Tableau, PowerBI).
Ability to handle multiple projects, prioritize tasks, and adapt to changing requirements in a fast-paced environment.
Our Global Benefits:
My Time Off (MTO) - our flexible approach to time off that allows you to take the time you need and enjoy it!
Career Progression - we offer personalized development opportunities and clear career pathways.
Health and wellbeing programs that provide you access to different services and offerings to prioritize your health.
Company Savings Plans to help you plan for the future.
Parental Leave benefits for all new parents.
Salary
$110,000 - $145,000 annually.
The salary range for this position is noted within this job posting. Where an employee or prospective employee is paid within this range will depend on, among other factors, actual ranges for current/former employees in the subject position; market considerations; budgetary considerations; tenure and standing with the company (applicable to current employees); as well as the employee's/applicant's background, pertinent experience, and qualifications.
About 90North
At 90NORTH, a software intelligence company, we harness proprietary software products to uncover unexpected insights and solve complex business problems. Our diverse team of solution architects, comprising social scientists, medical experts, business strategists, and creative technical designers, collaborates to help clients navigate the challenges of exploding data and AI paradigm shifts. We empower informed decision-making and drive impact through our innovative software products, including Intention Decoder and Hate Audit, which reveal the underlying motivations and sentiments behind human behavior.
For U.S. Job Seekers
It is the policy of IPG Health and any of its affiliates to provide equal employment opportunities to all employees and applicants for employment without regard to race, religion, color, ethnic origin, gender, gender identity, age, marital status, veteran status, sexual orientation, disability, or any other basis prohibited by applicable federal, state, or local law. EOE/AA/M/D/V/F.
Data Engineer
Data Scientist Job 17 miles from Union
Salary: $125 - 200k Base + Equity & Benefits
An AI-powered startup providing a system of intelligence for life sciences. They're building deep integrations across pharma's data systems to create a layer of artificial intelligence that powers broad decision-making.
The goal is to index the entire universe of pharma data, empower hundred-step AI agents that will drive the next generation of pharma decision-making, and directly impact the rate of progress in both AI and human health.
They're partnering with some of the biggest names in pharma and have strong early traction with a healthy pipeline to grow into.
Opportunity
Data is the lifeblood that powers pharma intelligence, insights, and decision-making. We need to stay up to date on the universe of events happening in pharma, and as a Data Engineer you will own and lead data engineering projects including web crawling, data ingestion, data modeling, and search.
You will design and build robust, scalable data pipelines to support our AI models.
You'll manage databases with a focus on optimizing data storage and retrieval, ensuring speed and efficiency.
You'll also lead the way in designing data models that best support our users' experience and workflows.
What we're looking for:
Experience with Python, PostgreSQL, and AWS
A strong track record of building production-ready, robust data pipelines
Experience with multimodal data processing, as well as chunking, embedding, and vector databases (see the sketch after this list)
Prior experience in a startup environment
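As a rough illustration of the chunking, embedding, and vector-search flow mentioned above (not the company's actual stack), here is a self-contained sketch. The `embed` function is a deliberately crude placeholder; a real pipeline would call an embedding model and store vectors in a vector database such as pgvector or Pinecone.

```python
# Chunk -> embed -> vector-search sketch with an in-memory index.
import numpy as np

def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split a document into overlapping character windows."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(chunks: list[str]) -> np.ndarray:
    """Placeholder embedding: bag-of-bytes, L2-normalized. Swap in a model."""
    vecs = np.zeros((len(chunks), 256))
    for row, c in enumerate(chunks):
        for b in c.encode():
            vecs[row, b] += 1.0
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

def top_k(query: str, chunks: list[str], index: np.ndarray, k: int = 3) -> list[str]:
    """Cosine-similarity search over the chunk index."""
    q = embed([query])[0]
    scores = index @ q                  # rows are unit vectors, so dot = cosine
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

docs = chunk("FDA approves phase III trial for compound X. Earnings call notes follow.")
print(top_k("trial approval", docs, embed(docs)))
```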
Sound interesting? Get in touch!
Senior Data Engineer
Data Scientist Job 17 miles from Union
Our large digital transformation company is seeking a talented and experienced Data Engineer to join our team, specializing in Databricks and the financial industry. As a Data Engineer, you will play a crucial role in designing, developing, and maintaining data pipelines and infrastructure through Databricks to support financial clients' data-driven initiatives.
Key Responsibilities Include:
- Architect and design end-to-end data solutions using Databricks to process and analyze large-scale financial data.
- Collaborate with stakeholders to understand business requirements and translate them into technical solutions.
- Develop and implement data integration, transformation, and validation processes to ensure data accuracy and consistency.
- Lead the development and optimization of ETL pipelines to ingest data from various sources into data lakes and warehouses.
- Ensure the reliability, performance, and scalability of data architectures.
Must haves:
7+ years in data architecture, helping to design solutions within the financial industry
Strong proficiency in SQL, Python, and Spark
Deep understanding of data warehousing concepts and technologies (e.g., Snowflake, Redshift).
Databricks Certified
Senior Data Engineer
Data Scientist Job 17 miles from Union
Job Title: Data Engineer (Databricks, ETL, Data & AI Platforms)
Job Type: Contract (40 hours a week)
We are working with a financial services firm that specializes in providing insights and analytics for banks, lenders, and other financial institutions. Their core focus is helping clients optimize their pricing, profitability, and customer engagement strategies through advanced data analysis and market intelligence.
We are looking for a skilled and motivated Data Engineer with 3 years of expertise in Databricks, ETL processes, and building scalable data and AI platforms. This role is pivotal in supporting the migration of products and will involve designing, implementing, and optimizing data pipelines to ensure seamless data integration across various systems. The ideal candidate will be passionate about leveraging cutting-edge technology to build robust and efficient data systems to power business intelligence, analytics, and AI-driven initiatives.
Key Responsibilities:
Databricks Development: Design, build, and optimize scalable data pipelines using Databricks to process large datasets and integrate various data sources into a unified data platform (a minimal Delta-based sketch follows this list).
ETL Pipelines: Develop and manage ETL processes to ingest, transform, and load data from diverse sources (on-premise and cloud-based) into data lakes or data warehouses. Ensure data quality, consistency, and integrity throughout the entire pipeline.
Platform Development: Build and maintain data and AI platforms, ensuring that they are secure, efficient, and capable of handling high volumes of data. Collaborate closely with AI/ML teams to enable seamless integration of models into production systems.
Product Migration Support: Assist in the migration of legacy systems and products to modern cloud-based data solutions. Ensure smooth data transfer, transformation, and system integration during the migration process.
Data Modeling: Collaborate with data scientists, analysts, and business stakeholders to design and implement appropriate data models for business intelligence and machine learning workloads.
Optimization and Monitoring: Optimize performance of data pipelines and platforms for speed and cost-effectiveness. Continuously monitor the health of data systems, troubleshoot issues, and implement improvements.
Collaboration and Documentation: Work closely with cross-functional teams including data scientists, DevOps, and product managers to ensure alignment with business needs and best practices. Maintain comprehensive documentation for all data engineering processes, pipelines, and systems.
Security and Compliance: Ensure all data solutions meet security, privacy, and compliance standards. Implement proper access controls and data governance measures.
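A minimal sketch of the kind of Databricks pipeline step described above, assuming PySpark with Delta Lake; the paths, schema, and merge key are illustrative assumptions, not details from the posting. On Databricks, the `spark` session already exists.

```python
# Hedged sketch of an incremental ETL step: land raw CSV, clean it,
# and upsert into a curated Delta table.
from pyspark.sql import SparkSession, functions as F
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

raw = (spark.read.option("header", True).csv("/mnt/landing/rates.csv")  # hypothetical path
       .withColumn("rate", F.col("rate").cast("double"))
       .dropna(subset=["bank_id", "rate"]))

# Upsert (merge) keeps the curated table idempotent across reruns.
target = DeltaTable.forPath(spark, "/mnt/curated/rates")
(target.alias("t")
 .merge(raw.alias("s"), "t.bank_id = s.bank_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```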
Key Skills & Qualifications:
Databricks: Proven experience working with Databricks, including developing and managing notebooks, jobs, and clusters, as well as leveraging Spark for distributed data processing.
ETL Tools and Frameworks: Strong experience with ETL technologies (e.g., Apache Airflow, AWS Glue, or Azure Data Factory), and proficiency in building end-to-end data pipelines (a minimal Airflow DAG sketch follows this list).
Data Integration: Expertise in integrating structured and unstructured data from multiple sources, including relational databases, APIs, and cloud-based data sources.
Cloud Technologies: Experience with cloud platforms such as AWS, Azure, or Google Cloud, including data storage services (e.g., S3, Blob Storage, BigQuery) and compute services.
Data Warehousing: Experience with modern data warehousing solutions such as Snowflake, Redshift, or BigQuery.
Programming Languages: Proficient in Python, Scala, or Java for building data pipelines, along with knowledge of SQL for querying and managing relational databases.
AI/ML Collaboration: Familiarity with data science and machine learning concepts, with experience enabling AI workflows on data platforms.
Problem-Solving: Strong troubleshooting, debugging, and performance optimization skills.
Communication: Excellent communication and collaboration skills, able to work effectively with stakeholders at all levels of the organization.
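For orientation, here is a minimal Airflow DAG of the extract-transform-load shape the role calls for, assuming Airflow 2.4+ for the `schedule` parameter; the task bodies and rate data are hypothetical stubs.

```python
# Minimal three-step ETL DAG; the callables are stubs standing in for
# real extract/transform/load logic against a warehouse.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_rates(ti):
    # Pull raw rows from a hypothetical source system.
    ti.xcom_push(key="raw", value=[{"bank": "A", "rate": "4.10"}])

def transform_rates(ti):
    raw = ti.xcom_pull(key="raw", task_ids="extract")
    ti.xcom_push(key="clean", value=[{**r, "rate": float(r["rate"])} for r in raw])

def load_rates(ti):
    clean = ti.xcom_pull(key="clean", task_ids="transform")
    print(f"would load {len(clean)} rows into the warehouse")

with DAG(
    dag_id="daily_rates_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_rates)
    transform = PythonOperator(task_id="transform", python_callable=transform_rates)
    load = PythonOperator(task_id="load", python_callable=load_rates)
    extract >> transform >> load
```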
Preferred Qualifications:
Experience with automated deployment pipelines and CI/CD in data engineering workflows.
Familiarity with data governance tools and frameworks (e.g., Apache Atlas, AWS Lake Formation).
Experience with containerization technologies such as Docker and Kubernetes.
Knowledge of data security principles and best practices in a cloud-based environment.
Data Engineer
Data Scientist Job 17 miles from Union
Key Responsibilities:
Collaborate with various management teams to ensure proper integration of functions to achieve goals and identify necessary system enhancements for new products and process improvements.
Address a variety of high-impact problems/projects through detailed evaluation of complex business and system processes, as well as industry standards.
Provide expertise in applications programming and ensure that application design aligns with the overall architecture blueprint.
Utilize advanced knowledge of system flow to develop standards for coding, testing, debugging, and implementation.
Develop a thorough understanding of how different business areas, such as architecture and infrastructure, work together to achieve business objectives.
Conduct in-depth analysis with interpretive thinking to identify issues and develop innovative solutions.
Act as an advisor or coach to mid-level developers and analysts, assigning work as needed.
Assess risks appropriately when making business decisions, ensuring compliance with applicable laws, rules, and regulations, adhering to policies, applying sound ethical judgment, and managing and reporting control issues transparently.
Qualifications:
6-10 years of relevant experience in application development or systems analysis, with expertise in PySpark, SQL, and Python.
Extensive experience in system analysis and software application programming.
Proven track record in managing and implementing successful projects.
Subject Matter Expert (SME) in at least one area of application development.
Ability to quickly adjust priorities as needed.
Demonstrated leadership and project management skills.
Clear and concise written and verbal communication skills.
Education:
Bachelor's degree or equivalent experience.
Desired Skills and Experience
Python
SQL
PySpark
Data Architecture
Application Development
Principal Data Consultant (India)
Data Scientist Job 17 miles from Union
The Principal Data Consultant is expected to participate in complex, multidisciplinary data and BI projects. You will work with our consulting team designing, implementing, and deploying data-centered solutions for a wide variety of clients across industry verticals and products. Candidates for this role should possess knowledge of data integration, data warehousing (DW), and BI capabilities, both functional and technical. Candidates should have experience migrating and integrating data between different platforms and be able to act as a knowledgeable liaison with business stakeholders and project teams. Additionally, the candidate should be able to uncover business requirements, develop a technical strategy, and create and effectively demonstrate solutions that address customer requirements.
This is a hybrid role that will require travel into the Mumbai office twice a week starting in the first half of 2025.
Responsibilities Include:
Synthesize information to produce impactful visual presentations that assist decision-makers
Support the build of Data Warehouses, Data Marts and Operational Data Stores
Identify data needs, providing guidance on how to extract, transform and load data, and recommend approaches to integrate with master data
Collaborate with project team for data migration testing and resolving defects
Produce highly detailed document artifacts and maintain them for the duration of projects
Be able to translate functional specifications into ETL packages for data integration/migration
Perform data profiling and analysis; develop data lineage and source-to-target mappings (a brief profiling sketch follows this list)
Partner with the client for data extraction and cleansing
Conduct data mapping and data loading, and create release management plans for complex migrations
Provide input and feedback regarding ETL processes.
Demonstrate ability to learn and research advanced data technologies and concepts, learning new skills and software as necessary
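As a hedged illustration of the profiling and source-to-target mapping steps above, here is a small pandas sketch; the file and column names are invented for the example.

```python
# Profile a legacy extract, then apply a reviewable source-to-target mapping.
import pandas as pd

src = pd.read_csv("legacy_accounts.csv")   # hypothetical extract file

# Basic profile: dtypes, nulls, and distinct counts per column drive
# cleansing rules and effort estimates for the migration.
profile = pd.DataFrame({
    "dtype": src.dtypes.astype(str),
    "nulls": src.isna().sum(),
    "null_pct": (src.isna().mean() * 100).round(1),
    "distinct": src.nunique(),
})
print(profile)

# Keeping the mapping as data means it can be reviewed with the client
# and fed straight into the ETL/load step.
mapping = {"acct_no": "account_id", "cust_nm": "customer_name"}
target = src.rename(columns=mapping)[list(mapping.values())]
```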
Basic Qualifications:
Requires Bachelor's degree in Computer Science, Software Engineering, Management Information Systems, or a related field, plus five years of experience with Salesforce development or equivalent combination of education and experience
8+ years' experience in data engineering working on different BI and ETL tools
Proficient with BI analytics applications like Tableau, Power BI, Cognos, etc.
Hands-on experience with ETL tools like SSIS, Informatica, Talend, etc.
Hands-on experience with data warehouse platforms like Snowflake, Amazon Redshift, etc.
Experience with database systems (MS SQL, Oracle, MySQL, etc.) and query languages (T-SQL, PL/SQL, etc.)
Exposure to data extraction, transformation, and data loading techniques
Experience creating and using Entity Relationship and Data Flow diagrams to communicate complex migration and transformations to clients
Experience with the complete software project lifecycle, recognizing the importance of data conversion and the time commitment involved
Must have experience working on projects with an Agile management process
Experience working with Salesforce CRM and Netsuite ERP systems is desirable
Passion for data accuracy and completeness
Can work in a fast-paced, fast-growth environment and manage priorities across multiple concurrent activities
Data Engineer
Data Scientist Job 6 miles from Union
The Data Tech team is looking for a Data Engineer to join a diverse team dedicated to providing best-in-class data services to our customers, stakeholders, and partners. As part of our organization, you will work with our client engineers, data scientists, and various business units to define solutions for operationalizing data-driven decision-making in a cost-effective and scalable manner.
Primary skill set: AWS, Glue, Step Functions, Python, Redshift
Qualifications:
Bachelor's degree in Computer Science, Software Engineering, MIS or equivalent combination of education and experience
Experience implementing and supporting data lakes, data warehouses, and data applications on AWS for large enterprises
AWS Solutions Architect or AWS Big Data Certification preferred
Programming experience with Java, Python/Scala, and shell scripting
Solid experience with AWS services such as CloudFormation, S3, Glue, EMR/Spark, RDS, Redshift, DynamoDB, Lambda, Step Functions, IAM, KMS, SM, etc.
Solid experience implementing solutions on AWS based data lakes
Experience implementing metadata solutions leveraging AWS non-relational data solutions such as ElastiCache and DynamoDB
Experience in AWS data lake, data warehouse and business analytics
Experience in system analysis, design, development, and implementation of data ingestion pipeline in AWS
Knowledge of ETL/ELT
Working experience with Hadoop, HDFS, Sqoop, Hive, Python, and Spark is desired
Experience working on Agile projects
Requirements:
3-5 years of experience as a Data Engineer
Experience developing business applications using NoSQL/SQL databases.
Experience working with object stores (S3) and JSON is a must-have.
Should have good experience with AWS services: API Gateway, Lambda, Step Functions, SQS, DynamoDB, S3, Elasticsearch.
Serverless application development using AWS Lambda (a minimal handler sketch follows this list).
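To make the serverless requirement concrete, here is a hedged sketch of an API Gateway to Lambda to DynamoDB read path; the table name, key schema, and route are assumptions for illustration.

```python
# Minimal Lambda handler for GET /orders/{order_id} behind API Gateway.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")  # hypothetical table name

def handler(event, context):
    """Look up one order by its path parameter and return it as JSON."""
    order_id = event["pathParameters"]["order_id"]
    resp = table.get_item(Key={"order_id": order_id})
    item = resp.get("Item")
    return {
        "statusCode": 200 if item else 404,
        "headers": {"Content-Type": "application/json"},
        # default=str tolerates DynamoDB's Decimal values in the payload
        "body": json.dumps(item or {"error": "not found"}, default=str),
    }
```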
Responsibilities:
Designing, building and maintaining efficient, reusable and reliable architecture and code
Ensure the best possible performance and quality of high scale web applications and services
Participate in the architecture and system design discussions
Independently perform hands on development and unit testing of the applications
Collaborate with the development team and build individual components into complex enterprise web systems
Work in a team environment with product, frontend design, production operations, QE/QA, and cross-functional teams to deliver a project throughout the whole software development cycle
Responsible for identifying and resolving any performance issues
Keep up to date with new technology development and implementation
Participate in code review to make sure standards and best practices are met
Plus
Experience with business intelligence tools such as Tableau, Power BI or equivalent
Awareness of machine learning algorithms (supervised/unsupervised)
AWS Developer certification is nice to have
Senior Big Data Engineer (PySpark & Hadoop) - No C2C
Data Scientist Job 11 miles from Union
Senior Big Data Engineer (PySpark & Hadoop)
HIRING DRIVE - INTERVIEW DAYS: Thursday, 03/13, and Friday, 03/14 - all interviews will be conducted on these days.
Job Description:
We are seeking an experienced Senior Big Data Engineer with a strong background in PySpark and Hadoop to join our direct client in the banking industry. The ideal candidate will have a deep understanding of large-scale data processing and optimization, along with hands-on experience in building high-performance data solutions.
Key Requirements:
Minimum 8+ years of experience in big data development using PySpark within the banking/financial sector.
Expertise in designing, developing, and optimizing large-scale data processing applications to ensure performance and efficiency.
In-depth proficiency with PySpark and the Apache Spark ecosystem for distributed data processing.
Strong programming skills in Python with a focus on PySpark.
Comprehensive understanding of Hadoop architecture, Hive, and HDFS for data storage and retrieval.
Advanced proficiency in SQL development, including query optimization and performance tuning for high-volume data processing.
This role is a great fit for someone who thrives in banking and financial environments, handling complex data pipelines and optimizing large-scale big data applications; a brief PySpark sketch of this kind of work follows.
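The following is a hedged PySpark sketch of a Hive-backed batch rollup of the sort this role describes; the table names, columns, and output path are illustrative assumptions, not details from the posting.

```python
# Daily settled-transaction rollup from a Hive table to partitioned Parquet.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder.appName("txn_daily_rollup")
    .enableHiveSupport()
    .getOrCreate()
)

txns = spark.table("raw.transactions")          # hypothetical Hive source table

daily = (
    txns.where(F.col("status") == "SETTLED")    # filter early to cut shuffle size
        .groupBy("account_id", F.to_date("ts").alias("txn_date"))
        .agg(F.sum("amount").alias("total"), F.count("*").alias("n_txns"))
)

# Partitioned Parquet keeps downstream Hive/HDFS reads pruned and fast.
daily.write.mode("overwrite").partitionBy("txn_date").parquet("/data/marts/daily_txns")
```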
“Mindlance is an Equal Opportunity Employer and does not discriminate in employment on the basis of - Minority/Gender/Disability/Religion/LGBTQI/Age/Veterans.”
Lead Data Engineer
Data Scientist Job 27 miles from Union
This role is for candidates who can work on a W2 basis.
About the Company:
A large-scale project that will process hundreds of terabytes of ERP data and other relevant data. Auditors will use this processed information to opine on client engagements and provide value-added services to their clients. The project currently uses Azure data technologies and Databricks.
About the Role:
We are looking for an experienced professional with at least 6+ years of hands-on experience with Databricks technology.
Responsibilities:
Design, build, and manage scalable data architectures using Databricks and Azure data technologies.
Design and implement modern data lakehouse solutions, ensuring seamless data integration, storage, and processing.
Collaborate with cross-functional teams to gather and translate business requirements into effective technical designs.
Ensure data quality and governance practices are implemented throughout the data lifecycle.
Optimize data workflows for performance, reliability, and security using Azure OneLake, Databricks Workflows and Databricks compute, Unity Catalog, Azure Synapse, and Power BI.
Develop and enforce best practices in data modeling, pipeline design, and storage architecture.
Conduct regular assessments of data systems to identify and address performance bottlenecks or areas for improvement.
Qualifications:
10+ Years of total IT experience in Data engineering and Data warehouse project development.
Required Skills:
6+ years of hands-on experience with Azure Databricks and expertise in PySpark & Python development.
Proven expertise in designing and managing scalable data architectures. Experience with Databricks Serverless is a plus.
6+ years' experience with Azure Synapse, Data Factory, and other Azure data technologies.
8+ years' experience in data modeling; 6+ years' experience with data pipeline design and implementation and cloud storage architecture.
Deep understanding of data quality and governance practices, with hands-on experience in data quality and governance using Unity Catalog or Azure Purview.
Ability to collaborate with cross-functional teams and translate business requirements into technical designs.
Preferred Skills:
Familiarity with other modern data processing tools and technologies.
Previous experience in a similar role working with large-scale data systems.
Lead Data Engineer - Data Reliability
Data Scientist Job 17 miles from Union
On any given day at Disney Entertainment & ESPN Technology, we're reimagining ways to create magical viewing experiences for the world's most beloved stories while also transforming Disney's media business for the future. Whether that's evolving our streaming and digital products in new and immersive ways, powering worldwide advertising and distribution to maximize flexibility and efficiency, or delivering Disney's unmatched entertainment and sports content, every day is a moment to make a difference to partners and to hundreds of millions of people around the world.
A few reasons why we think you'd love working for Disney Entertainment & ESPN Technology
Building the future of Disney's media business: DE&E Technologists are designing and building the infrastructure that will power Disney's media, advertising, and distribution businesses for years to come.
Reach & Scale: The products and platforms this group builds and operates delight millions of consumers every minute of every day - from Disney+ and Hulu, to ABC News and Entertainment, to ESPN and ESPN+, and much more.
Innovation: We develop and execute groundbreaking products and techniques that shape industry norms and enhance how audiences experience sports, entertainment & news.
The Data Reliability Engineering team for Disney's Product and Data Engineering team is responsible for maintaining and improving the reliability of Disney Entertainment's big data platform, which processes hundreds of terabytes of data and billions of events daily.
About The Role
The Lead Data Engineer will help us in the ongoing mission of delivering outstanding services to our users, allowing Disney Entertainment to be more data-driven. You will work closely with our partner teams to monitor and drive improvements for reliability and observability of their critical data pipelines and deliverables. This is a high-impact role where your work informs decisions affecting millions of consumers, with a direct tie to The Walt Disney Company's revenue. You will be making an outsized impact in an organization that values data as its top priority. We are a tight and driven team with big goals, so we seek people who are passionate about solving the toughest challenges and working at scale, using, supporting, and building distributed systems in a fast-paced collaborative team environment. We also support a healthy work-life balance.
Responsibilities
Assist in designing and developing a platform to support incident observability and automation. This team will be required to build high-quality data models and products that monitor and report on data pipeline health and data quality.
Lead project work efforts internally and externally: set project deliverables, review design documents, perform code reviews, and help mentor junior members of the team.
Collaborate with engineering teams to improve, maintain, performance tune, and respond to incidents on our big data pipeline infrastructure.
Own building out key components for observability and intelligent monitoring of data pipelines and infrastructure to achieve early and automated anomaly detection and alerting. Present your research and insights to all levels of the company, clearly and concisely.
Build solutions to continually improve our software release and change management process using industry best practices to help DSS maintain legal compliance.
Basic Qualifications
7+ years of experience working on mission critical data pipelines and ETL systems.
7+ years of hands-on experience with big data technology, systems and tools such as AWS, Hadoop, Hive, and Snowflake
Detailed problem-solving approach, coupled with a strong sense of ownership and drive
A bias to action and a passion for delivering high-quality data solutions
Expertise with common Data Engineering languages such as Python, Scala, Java, SQL and a proven ability to learn new programming languages
Experience with workflow orchestration tools such as Airflow
Deep understanding of end-to-end pipeline design and implementation.
Attention to detail and quality with excellent problem solving and interpersonal skills
Preferred Qualifications
Advanced degrees are a plus.
Strong data visualization skills to convey information and results clearly
Ability to work independently and drive your own projects.
Exceptional interpersonal and communication skills.
Impactful presentation skills in front of a large and diverse audience.
Experience with DevOps tools such as Docker, Kubernetes, Jenkins, etc.
Innate curiosity about consumer behavior and technology
Experience with event messaging frameworks like Apache Kafka
A fan of movies and television is a strong plus.
Required Education
Bachelor's degree in Computer Science, Information Systems, Software, Electrical or Electronics Engineering, or comparable field of study, and/or equivalent work experience
Additional Information
#DISNEYTECH
The hiring range for this position in Santa Monica, California is $152,200 to $204,100 per year, in Bristol, Connecticut is $152,200 to $204,100 per year, in Seattle, Washington is $159,500 to $213,900 per year, in New York City, NY is $159,500 to $213,900 per year, and in San Francisco, California is $166,800 to $223,600 per year. The base pay actually offered will take into account internal equity and also may vary depending on the candidate's geographic region, job-related knowledge, skills, and experience among other factors. A bonus and/or long-term incentive units may be provided as part of the compensation package, in addition to the full range of medical, financial, and/or other benefits, dependent on the level and position offered.
Senior Data Engineer
Data Scientist Job 9 miles from Union
We are:
At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+, and has 58 offices in 21 countries within key global markets.
Our Challenge:
We are seeking to hire strong Data Engineers with expertise in Snowflake and Python along with strong SQL Expertise.
Additional Information:
The base salary for this position will vary based on geography and other factors. In accordance with law, the base salary for this role if filled within Iselin, NJ is $130k - $145k/year & benefits (see below).
Responsibilities:
Hands-on development experience with Snowflake features such as SnowSQL, Snowpipe, Python integration, Tasks, Streams, Time Travel, Zero-Copy Cloning, the Optimizer, Metadata Manager, data sharing, and stored procedures (a short Python sketch of several of these features follows this list).
Experience in data warehousing: OLTP, OLAP, dimensions, facts, and data modeling.
Need to have working knowledge of MS Azure configuration items with respect to Snowflake.
Developing EL pipelines in and out of the data warehouse using a combination of Databricks, Python, and SnowSQL.
Strong understanding of Snowflake-on-Azure architecture, and of the design, implementation, and operationalization of large-scale data and analytics solutions on the Snowflake Cloud Data Warehouse.
Developing scripts (UNIX shell, Python, etc.) to extract, load, and transform data, as well as other utility functions.
Provide production support for data warehouse issues such as data load problems and transformation/translation problems.
Translate mapping specifications into data transformation designs, development strategies, and code, incorporating standards and best practices for optimal execution.
Understanding data pipelines and modern ways of automating them using cloud-based testing, and clearly documenting implementations so others can easily understand the requirements, implementation, and test conditions.
Perform code reviews to ensure fit to requirements, optimal execution patterns and adherence to established standards.
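Here is a hedged sketch, via the snowflake-connector-python package, of three of the Snowflake features named above: Zero-Copy Cloning, Streams, and Tasks. The account, warehouse, and table names are placeholders, not details from the posting.

```python
# Drive Snowflake features (clone, stream, task) from Python.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="...",  # placeholders
    warehouse="ETL_WH", database="ANALYTICS", schema="PUBLIC",
)
cur = conn.cursor()

# Zero-copy clone: an instant, storage-free copy for testing a load.
cur.execute("CREATE OR REPLACE TABLE orders_dev CLONE orders")

# A stream tracks row-level changes on the base table...
cur.execute("CREATE OR REPLACE STREAM orders_stream ON TABLE orders")

# ...and a task periodically drains those changes into a mart.
cur.execute("""
    CREATE OR REPLACE TASK merge_orders
      WAREHOUSE = ETL_WH
      SCHEDULE = '5 MINUTE'
    AS
      INSERT INTO orders_mart SELECT * FROM orders_stream
""")
cur.execute("ALTER TASK merge_orders RESUME")  # tasks are created suspended
conn.close()
```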
Requirements:
Minimum 8 years of designing and implementing operational, production-grade, large-scale data solutions on the Microsoft Azure Snowflake Data Warehouse.
Including hands-on experience with productionized data ingestion and processing pipelines using Python, Databricks, and SnowSQL.
Excellent understanding of Snowflake Internals and integration of Snowflake with other data processing and reporting technologies
Excellent presentation and communication skills, both written and verbal, ability to problem solve and design in an environment with unclear requirements.
It would be great if you also had:
Detail-oriented, ability to turn deliverables around quickly with a high degree of accuracy
Strong analytical skills, ability to interpret business requirements and produce functional and technical design documents.
Good time management skills - Ability to prioritize and multi-task, handling multiple efforts at once.
Strong desire to understand and learn domain.
Experience in a financial services/banking industry
Ability to work in a fast-paced environment; to be flexible and learn quickly.
Ability to multi-task and prioritize tasks with attention to detail.
We can offer you:
A highly competitive compensation and benefits package
A multinational organization with 58 offices in 21 countries and the possibility to work abroad
Laptop and a mobile phone
15 days of paid annual leave (plus sick leave and national holidays)
A comprehensive insurance plan including: medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region)
Retirement savings plans
A higher education certification policy
Extensive training opportunities, focused on skills, substantive knowledge, and personal development
On-demand Udemy for Business for all Synechron employees with free access to more than 5000 curated courses
Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Center of Excellences (CoE) groups
Cutting edge projects at the world's leading tier-one banks, financial institutions and insurance firms
A flat and approachable organization
A truly diverse, fun-loving and global work culture
SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and is an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative ‘Same Difference’ is committed to fostering an inclusive culture - promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, race, ethnicities, religion, age, marital status, gender, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
Celonis Data Engineer
Data Scientist Job 12 miles from Union
4 to 10 years of experience as a Process Analyst SME in the DW domain, with at least 2-3 years of experience working on Celonis, including customer project-based experience as an Analyst or Data Engineer on Celonis implementations.
Certified, with knowledge of real-time business cases in Process Mining.
Proficiency with SQL or other programming languages (Python) and a strong interest in Data Mining and Process Mining.
Proficiency with UNIX/Shell scripting.
Writing SQL queries and shell scripts for data acquisition.
Strong understanding of execution-oriented solutions in Celonis.
Excellent analytical skills; well organized and known for being a quick learner.
Strong communication skills; enjoys interacting with various customers to understand and interpret business processes.
If you're interested, please share your resume at
****************************
Life At Capgemini
Capgemini supports all aspects of your well-being throughout the changing stages of your life and career. For eligible employees, we offer:
Flexible work
Healthcare including dental, vision, mental health, and well-being programs
Financial well-being programs such as 401(k) and Employee Share Ownership Plan
Paid time off and paid holidays
Paid parental leave
Family building benefits like adoption assistance, surrogacy, and cryopreservation
Social well-being benefits like subsidized back-up child/elder care and tutoring
Mentoring, coaching and learning programs
Employee Resource Groups
Disaster Relief
Disclaimer
Capgemini is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status or any other characteristic protected by law.
This is a general description of the Duties, Responsibilities and Qualifications required for this position. Physical, mental, sensory or environmental demands may be referenced in an attempt to communicate the manner in which this position traditionally is performed. Whenever necessary to provide individuals with disabilities an equal employment opportunity, Capgemini will consider reasonable accommodations that might involve varying job requirements and/or changing the way this job is performed, provided that such accommodations do not pose an undue hardship.
Capgemini is committed to providing reasonable accommodations during our recruitment process. If you need assistance or accommodation, please reach out to your recruiting contact.
Lead Data Engineer, Data Reliability
Data Scientist Job 17 miles from Union
Lead Data Engineer, Data Reliability
Job ID: 10103077
Location: Santa Monica, California, United States / Bristol, Connecticut, United States / San Francisco, California, United States / New York, New York, United States / Seattle, Washington, United States
Business: Disney Entertainment & ESPN Technology
Date posted: Jan. 21, 2025
Job Summary:
On any given day at Disney Entertainment & ESPN Technology, we're reimagining ways to create magical viewing experiences for the world's most beloved stories while also transforming Disney's media business for the future. Whether that's evolving our streaming and digital products in new and immersive ways or delivering Disney's unmatched entertainment and sports content, every day is a moment to make a difference to partners and to hundreds of millions of people around the world.
The Data Reliability Engineering team for Disney's Product and Data Engineering team is responsible for maintaining and improving the reliability of Disney Entertainment's big data platform, which processes hundreds of terabytes of data and billions of events daily.
The Lead Data Engineer will help us in the ongoing mission of delivering outstanding services to our users allowing Disney Entertainment to be more data-driven. You will work closely with our partner teams to monitor and drive improvements for reliability and observability of their critical data pipelines and deliverables. This is a high-impact role where your work informs decisions affecting millions of consumers, with a direct tie to The Walt Disney Company's revenue. We seek people who are passionate about solving the toughest challenges and working at scale, using, supporting, and building distributed systems in a fast-paced collaborative team environment.
Assist in designing and developing a platform to support incident observability and automation.
Lead project work efforts internally and externally setting project deliverables, review design documents, perform code reviews and help mentor junior members of the team.
Collaborate with engineering teams to improve, maintain, performance tune, and respond to incidents on our big data pipeline infrastructure.
Own building out key components for observability and intelligent monitoring of data pipelines and infrastructure.
Build solutions to continually improve our software release and change management process.
Minimum Qualifications:
7+ years of experience working on mission critical data pipelines and ETL systems.
7+ years of hands-on experience with big data technology, systems and tools such as AWS, Hadoop, Hive, and Snowflake.
Expertise with common Data Engineering languages such as Python, Scala, Java, SQL.
Experience with workflow orchestration tools such as Airflow.
Deep understanding of end-to-end pipeline design and implementation.
Bachelor's degree in Computer Science, Information Systems, Software, Electrical or Electronics Engineering, or comparable field of study.
#DISNEYTECH
Data Engineer
Data Scientist Job 14 miles from Union
10+ years of experience as a Data Engineer, with experience in building ETL/data pipelines with cloud technologies
4+ years of extensive experience in designing, developing, and implementing scalable data pipelines using Databricks to ingest, transform, and store structured and unstructured data.
5+ years' experience with programming languages such as Python/PySpark and query languages like SQL
5+ years' experience in building metadata-driven data ingestion pipelines using ADF
5+ years' experience in analyzing, optimizing, and tuning existing data pipelines for performance, reliability, and efficiency
2+ years' experience in implementing MLOps practices to streamline the deployment and management of machine learning models.
2+ years' experience in utilizing Apache Airflow for job orchestration and workflow management
Familiarity with CI/CD tools and practices for automating the deployment of data engineering solutions.
Experience in collaborating with data scientists, analysts, and other stakeholders to understand business requirements and translate them into technical solutions
Knowledge/Experience in implementing security measures and standard processes to ensure data privacy and compliance with regulatory standards
In-depth knowledge of data engineering concepts, ETL processes, and data architecture principles.
The pay range for this role is US$145k - US$150k per annum including any bonuses or variable pay. Tech Mahindra also offers benefits like medical, vision, dental, life, disability insurance and paid time off (including holidays, parental leave, and sick leave, as required by law). Ask our recruiters for more details on our Benefits package. The exact offer terms will depend on the skill level, educational qualifications, experience, and location of the candidate.
Tech Mahindra is an Equal Employment Opportunity employer. We promote and support a diverse workforce at all levels of the company. All qualified applicants will receive consideration for employment without regard to race, religion, color, sex, age, national origin or disability. All applicants will be evaluated solely on the basis of their ability, competence, and performance of the essential functions of their positions with or without reasonable accommodations. Reasonable accommodations also are available in the hiring process for applicants with disabilities. Candidates can request a reasonable accommodation by contacting the company ADA Coordinator at ADA_******************************.
Data Engineer
Data Scientist Job 17 miles from Union
IntePros Inc is looking for a Business Intelligence Engineer (BIE) to be a key partner in driving insights and reporting that support our growing suite of advertising products. As a BIE, you will extract value from large and complex data sets, influence business decisions, and support the evolution of products. You will play a crucial role in streamlining metric reporting for our operating cadences, such as daily, weekly, and monthly business reviews.
The ideal candidate is a self-starter who thrives in ambiguity, takes ownership of initiatives, and has a bias for action in delivering business impact through data analytics. You will work closely with internal and external stakeholders, balancing independent work with collaboration in a highly communicative, office-based team environment.
Key Responsibilities:
Own the design, development, and maintenance of ongoing metrics, reports, dashboards, and analyses to drive key business decisions.
Recommend new metrics, techniques, and strategies to improve team performance and future measurement.
Define data elements and data structures to enable analytical and reporting capabilities for business development teams.
Design and implement operational best practices for reporting and analytics, enabling scalability.
Prepare and present business reviews to senior management, highlighting progress and roadblocks.
Work with engineering teams and data warehouses to capture and store key data points effectively.
Aggregate data from multiple sources and deliver it in a digestible, actionable format for decision-making.
Participate in strategic and tactical planning discussions.
Daily Responsibilities:
ETL processes: data manipulation, transformation, and ingestion from internal ad sales sources.
Prepare data marts and support Tableau dashboards for finance teams.
Focus on maintaining and enhancing ~2 Tableau dashboards while collaborating with the team.
Participate in LCSUS organization stakeholder engagement (~90% core stakeholder).
Attend biweekly/weekly business review calls with leadership.
Work independently on assigned tasks with structured team collaboration and prioritization meetings every other day.
Less involvement in deep data dives or dashboard feature development; focus on refining existing dashboards and integrating new data.
Qualifications:
SQL - Ability to retrieve, manipulate, and optimize large datasets.
Tableau experience - Building, maintaining, and enhancing dashboards for business insights.
Strong communication skills in a data-driven environment (data mart design, analytics).
Preferred:
Bachelor's Degree in Math, Statistics, Computer Science, or Marketing.
Master's Degree preferred.
Experience working with advertising analytics, finance teams, and internal stakeholders.
Snowflake Data Engineer
Data Scientist Job 17 miles from Union
Job Title: Snowflake Data Engineer / Snowflake Admin / Data Analyst
Job Summary: The Snowflake Data Engineer is responsible for designing, implementing, and optimizing data solutions using the Snowflake Data Cloud platform. This role involves managing Snowflake environments, ensuring data security, and performing data analysis to support business decision-making.
This role requires the individual to function effectively as a Snowflake administrator as well as to produce insights that will help with the management and growth of the business.
Key Responsibilities:
Database Management: Configure and maintain Snowflake environments, including warehouses, databases, and other objects. Ensure optimal performance and cost-efficiency.
Data Security: Implement and manage security measures within Snowflake, including data encryption, access controls, and network policies. Ensure compliance with data protection regulations.
Data Integration: Develop and manage data pipelines to extract, transform, and load (ETL) data into Snowflake from various sources.
Performance Optimization: Monitor and optimize the performance of Snowflake environments, including query performance tuning and resource management.
Data Analysis: Perform detailed data analysis using Snowflake to identify trends, patterns, and insights. Develop and maintain dashboards, reports, and visualizations.
Collaboration: Work closely with cross-functional teams to understand data requirements and deliver insights. Provide technical support and training to users on Snowflake features.
Documentation: Document database schemas, configurations, procedures, and best practices. Maintain data catalogs and dictionaries.
Cost Management: Monitor and optimize Snowflake credit usage to control costs without compromising performance.
Qualifications:
Education: Bachelor's degree in Data Science, Computer Science, Information Technology, or a related field.
Experience: 3-5 years of relevant professional experience as a Data Engineer, Database Administrator, or Data Analyst, preferably with experience in Snowflake.
Skills: Strong proficiency in SQL and data analysis tools. Familiarity with Snowflake's architecture and features. Experience with data visualization tools (e.g., Tableau, Power BI) and programming languages (e.g., Python, R) is a plus.
Certifications: Relevant certifications in data engineering, database administration, or Snowflake, such as “SnowPro Advanced: Data Analyst,” are advantageous.
Preferred Attributes:
Strong analytical and problem-solving skills
Excellent communication and collaboration skills
Ability to work independently and as part of a team
Attention to detail and a commitment to data accuracy
Supporting system documentation and workflow diagrams
Data Engineer
Data Scientist Job 16 miles from Union
9+ years of experience in IT development projects
Strong experience in Tableau development and good data analytics skills
Strong understanding of visual analytics and the ability to design efficient workbooks
Experience with agile development methodologies
A quick learner with a strong work ethic, which is important to be successful in this role
Good to have: knowledge of the Hadoop ecosystem (especially Spark and Hive)
Solid knowledge of writing advanced SQL queries; analyze data in Google BigQuery through SQL