Data Scientist III
Data Scientist Job 15 miles from Hawthorne
Glocomms has partnered with a global tech organization dedicated to transforming lives through spoken-word entertainment. They work with top creators to produce and share audio stories with millions of listeners worldwide. In this role, you will lead the core playback logic from start to finish, ensuring the best playback experience across Android, iOS, and Web platforms. This includes managing content delivery infrastructure, security, digital rights management, and real-time processing of listening data. You'll also spearhead initiatives to advance the state of the art in Internet Audio.
Team Overview
Our data science team partners with marketing, content, product, and technology teams to solve challenges using advanced ML, DL, and NLP techniques. We operate in an agile environment, managing the lifecycle of research and model development.
Responsibilities
Optimize customer interactions with validated models
Create data engineering pipelines
Innovate with cutting-edge applications
Develop data visualizations
Collaborate with data scientists, ML experts, and engineers
Share ideas and learn from the team
Basic Qualifications
Experience in modeling and research design
MS in a quantitative field or PhD with relevant experience
Proficiency in SQL and Python
Experience with AWS and Big Data Engineering
Agile Software Development experience
Preferred Qualifications
Expertise in paid and organic marketing
Experience with container platforms and streaming data
Proficiency in R, RShiny, and Scala
Data Scientist
Data Scientist Job 23 miles from Hawthorne
We are building a world-class systematic data platform which will power the next generation of our systematic portfolio engines.
The systematic data group is looking for a Data Scientist to join our growing team. The team consists of content specialists, data scientists, analysts and engineers who are responsible for discovering, maintaining and analyzing sources of alpha for our portfolio managers.
This is an opportunity for individuals who are passionate about quantitative investing. The role builds on the individual's knowledge and skills in four key areas of quantitative investing: data, statistics, technology, and financial markets.
Principal Responsibilities
Research potential alpha sources and present findings to portfolio managers and quantitative analysts
Utilize and maintain world-class data processing and transformation engines
Build technology tools to acquire and tag datasets
Engage with vendors and brokers to understand the characteristics of datasets
Interact with portfolio managers and quantitative analysts to understand their use cases and recommend datasets to help maximize their profitability
Analyze datasets to generate key descriptive statistics
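To make the last two items concrete, here is a minimal sketch, assuming hypothetical column names and a rank-IC screen rather than the firm's actual methodology, of evaluating a candidate dataset with pandas and scipy:

```python
# Illustrative only: descriptive statistics plus a daily cross-sectional rank
# information coefficient (IC) between a candidate signal and forward returns.
# The 'signal' and 'fwd_return' columns are assumptions for the sketch.
import pandas as pd
from scipy.stats import spearmanr

def evaluate_signal(df: pd.DataFrame) -> dict:
    """df columns: 'date', 'ticker', 'signal', 'fwd_return'."""
    stats = df["signal"].describe()  # count/mean/std/quantiles for the writeup
    coverage = df.groupby("date")["ticker"].nunique().mean()  # avg names per day
    # Spearman correlation of signal vs. next-period return, computed per day.
    daily_ic = df.groupby("date").apply(
        lambda g: spearmanr(g["signal"], g["fwd_return"]).correlation
    )
    return {
        "signal_mean": stats["mean"],
        "signal_std": stats["std"],
        "avg_daily_coverage": coverage,
        "mean_rank_ic": daily_ic.mean(),
        "ic_t_stat": daily_ic.mean() / daily_ic.sem(),  # naive significance check
    }
```

A consistently positive mean rank IC with a reasonable t-statistic is one common first screen before recommending a dataset to portfolio managers.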
Qualifications/Skills Required
Ph.D. or Master's in computer science, mathematics, statistics, or another field requiring quantitative analysis
3+ years of financial industry experience preferred
Programming expertise in Python, C++, Java or C#
Programming skills in SQL, PL-SQL or T-SQL
Strong problem-solving skills
Strong communication skills
Senior Data Scientist - Generative AI - Agents and LLM Developer (New York-based only; no relocation or remote, please do not apply otherwise)
Data Scientist Job 23 miles from Hawthorne
Scalata is an AI-driven finance automation platform located in New York, NY. Our vision is to democratize investment data using generative AI to empower credit decision making. We aim to drive global economic growth, foster financial inclusion, and create a positive ripple effect that empowers businesses, communities, and economies worldwide.
Role Description
This is a full-time on-site role for a Data Scientist specializing in Machine Learning and Artificial Intelligence with a focus on Natural Language Processing/Understanding. The Data Scientist will be responsible for tasks such as data analysis, statistical modeling, data visualization, and implementing AI models to automate the credit lifecycle.
Qualifications
Data Science, Data Analytics, and Data Analysis skills
Experience in Data Visualization
Strong programming skills in Python and PySpark
Extensive knowledge of Machine Learning and Artificial Intelligence algorithms
Highly skilled in developing agents using LangChain or other frameworks, plus chatbot management software development (a framework-agnostic agent-loop sketch follows this list)
Practical knowledge and experience in building large language models
Experience with Natural Language Processing
Strong problem-solving and critical thinking abilities
Bachelor's or Master's degree in Computer Science, Statistics, Data Science, or related field
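For orientation, a framework-agnostic sketch of the tool-calling agent loop this posting alludes to; this is not LangChain code: call_llm stands in for any chat-completion client, and the single tool is a toy.

```python
# Hypothetical agent loop: the model proposes a tool call as JSON, the loop
# executes it and feeds the observation back, until a final answer appears.
import json

def lookup_revenue(company: str) -> str:
    return f"Reported revenue for {company}: $12.3M"  # stub data source

TOOLS = {"lookup_revenue": lookup_revenue}

def run_agent(call_llm, question: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        # The model is prompted (elsewhere) to answer with JSON of the form
        # {"tool": name, "args": {...}} or {"final": "answer"}.
        reply = json.loads(call_llm(messages))
        if "final" in reply:
            return reply["final"]
        observation = TOOLS[reply["tool"]](**reply["args"])  # run the tool
        messages.append({"role": "assistant", "content": json.dumps(reply)})
        messages.append({"role": "user", "content": f"Observation: {observation}"})
    return "Stopped: step budget exhausted."
```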
Lead Data Scientist
Data Scientist Job 23 miles from Hawthorne
About the Job:
As a Lead Data Scientist, you will transform complex data into actionable insights using advanced quantitative methods. You will analyze and interpret large datasets to inform business decisions, and craft compelling presentations to drive business strategy and optimize customer engagement.
A Day in the Life:
Provide actionable insights to stakeholders through data analysis, research, and modeling.
Integrate and manage internal and external data sources, including structured and unstructured data.
Develop and apply advanced quantitative methods, including statistical models and machine learning techniques.
Analyze customer behavior and market trends to inform business strategies and optimize ROI.
Prepare and deliver clear, insightful presentations to various audiences.
Conduct multi-channel analysis, leveraging online and offline data to drive business decisions.
Mentor and manage junior data scientists, guiding workflow and development.
What You Will Need:
Bachelor's degree in Mathematics, Statistics, Economics, Computer Science, Engineering, or other natural sciences (Master's degree preferred).
5+ years of experience in open-source programming, applied statistics, and machine learning.
Proven expertise in relational database management, data wrangling, and statistical analysis.
Excellent communication and presentation skills, with experience presenting to upper management.
Strong project and team management skills, with ability to manage workflow of 1-3 data scientists or analysts.
Experience integrating data scientific modeling into data visualization platforms (e.g., Tableau, PowerBI).
Ability to handle multiple projects, prioritize tasks, and adapt to changing requirements in a fast-paced environment.
Our Global Benefits:
My Time Off (MTO) - our flexible approach to time off that allows you to take the time you need and enjoy it!
Career Progression - we offer personalized development opportunities and clear career pathways.
Health and wellbeing programs that provide you access to different services and offerings to prioritize your health.
Company Savings Plans to help you plan for the future.
Parental Leave benefits for all new parents.
Salary
$110,000 - $145,000 annually.
The salary range for this position is noted within this job posting. Where an employee or prospective employee is paid within this range will depend on, among other factors, actual ranges for current/former employees in the subject position; market considerations; budgetary considerations; tenure and standing with the company (applicable to current employees); as well as the employee's/applicant's background, pertinent experience, and qualifications.
About 90North
At 90NORTH, a software intelligence company, we harness proprietary software products to uncover unexpected insights and solve complex business problems. Our diverse team of solution architects, comprising social scientists, medical experts, business strategists, and creative technical designers, collaborates to help clients navigate the challenges of exploding data and AI paradigm shifts. We empower informed decision-making and drive impact through our innovative software products, including Intention Decoder and Hate Audit, which reveal the underlying motivations and sentiments behind human behavior.
For U.S. Job Seekers
It is the policy of IPG Health and any of its affiliates to provide equal employment opportunities to all employees and applicants for employment without regard to race, religion, color, ethnic origin, gender, gender identity, age, marital status, veteran status, sexual orientation, disability, or any other basis prohibited by applicable federal, state, or local law. EOE/AA/M/D/V/F.
Data Engineer
Data Scientist Job 23 miles from Hawthorne
Salary: $125k - $200k base + equity & benefits
An AI-powered startup providing a system of intelligence for life sciences. They're building deep integrations across pharma's data systems to create a layer of artificial intelligence that powers broad decision-making.
The goal is to index the entire universe of pharma data, empower hundred-step AI agents that will drive the next generation of pharma decision-making, and directly impact the rate of progress in both AI and human health.
They're partnering with some of the biggest names in pharma and have strong early traction with a healthy pipeline to grow into.
Opportunity
Data is the lifeblood that powers pharma intelligence, insights, and decision-making. We need to stay up to date on the universe of events happening in pharma, and as a Data Engineer you will own and lead data engineering projects including web crawling, data ingestion, data modeling, and search.
You will design and build robust, scalable data pipelines to support our AI models.
You'll manage databases with focus on optimizing data storage and retrieval, ensuring speed and efficiency.
You'll also lead the way to designing data models that best support our users' experience and workflows.
What we're looking for:
Experience with Python, PostgreSQL, and AWS
A strong track record of building robust, production-ready data pipelines
Experience with multimodal processing of data, as well as chunking, embedding, and vector databases (a minimal sketch follows this list)
Prior experience in a Startup environment
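As referenced in the list above, a rough sketch of the chunk-embed-retrieve flow; the fixed-size chunker and cosine search are standard techniques, and in practice an embedding model and a vector database would replace the bare numpy pieces:

```python
import numpy as np

def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    # Fixed-size character chunks with overlap so context isn't cut mid-thought.
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def top_k(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 3) -> np.ndarray:
    # Cosine similarity = dot product of L2-normalized vectors.
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    return np.argsort(d @ q)[::-1][:k]  # indices of the k nearest chunks
```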
Sound interesting? Get in touch!
Technology and Data Analytics Analyst/Associate
Data Scientist Job 23 miles from Hawthorne
MONTICELLOAM, LLC and its affiliates (“Monticello”) is a real estate and asset-based lender providing asset management and comprehensive capital solutions for healthcare, multifamily, and commercial real estate assets throughout the US. Monticello is seeking team players who can work in a collaborative environment and possess drive, integrity, creativity, compassion, and a strong work ethic.
We are looking for a Technology and Data Analytics Analyst/Associate for our New York City office to support investment management teams such as originations, underwriting, and asset management, as well as finance, accounting, compliance, investor relations, and human resources.
The Technology and Data Analytics Analyst/Associate's primary responsibilities are:
Assist in the design and implementation of a data warehouse, including the setup of various data tables and flow of information from various sources into and out of the data warehouse to support analytics, dashboarding, and automated process flow
Proactively analyze investment related data to answer key questions from internal and external stakeholders including executive management, investment team members, and investors
Develop custom investment related reports across healthcare and multi-family real estate debt and equity
Perform qualitative and quantitative research on public and proprietary data sets and technologies to develop insights, create presentations, and make actionable business recommendations
Leverage AI tools to automate data entry and analysis
Evaluate individual investment and portfolio performance across asset class, geography, and other segmentations to identify key trends
Break complex processes down into their individual components and identify areas where data and technology can increase efficiencies, effectiveness, and scalability
Sustain and oversee the data management systems critical to the firm's success.
Job Requirements:
Bachelor's Degree
Finance, accounting, credit, legal, real estate and/or business background
Established organizational skills and ability to simultaneously handle multiple projects
Extensive technical skills, including iLevel, Snowflake, Tableau, Monday.com, SQL, and Python
Experience sourcing and analyzing data through APIs, data scraping, and database querying
Ability to quickly learn new tools and technologies
Interest in financing healthcare, senior housing, multi-family housing and/or renewable energy preferred
Effective oral and written communication and interpersonal skills to liaise with borrowers, financing counterparties, and other external parties
Advanced financial analytical proficiency along with the ability to “see the big picture”
Strong grasp of logic and data analytics
Passion for the firm and passion for what we do
Intellectual curiosity and a desire to understand the purpose behind their work
We firmly believe that the most innovative solutions arise from a diverse, collaborative environment that welcomes varied perspectives and backgrounds. We are dedicated to fostering an inclusive workplace that not only embraces differences but also empowers all individuals, providing them with opportunities to unleash their entrepreneurial spirit. We are an equal opportunity employer.
This opportunity will offer a competitive base salary and performance-based bonuses. The base salary for this position falls within the range of $90,000 to $110,000 per year. The specific compensation package will be determined based on the qualifications of the selected candidate at the time of hiring. Additionally, employees may be eligible for discretionary bonuses, contingent upon their annual performance reviews.
Data Analyst
Data Scientist Job 11 miles from Hawthorne
Our client is seeking a Data Analyst to join their team! This position allows for applicants in Irving, TX; Basking Ridge, NJ; or Atlanta, GA.
Produce, edit, and proofread clear and concise content for multiple channels, including voice channels, web, app, agent systems, and other applicable platforms
Collaborate closely with UX Writers, Content Strategists, and Translation services to ensure high-quality copy and Content Quality Assurance (CQA)
Maintain and review tone, voice, and personas across all content created for nine different brands
Design, build, and implement VOC (Voice of the Customer) data flow, data validation, and system integrations in collaboration with GTS, AI&D, vendors, business teams, and development teams.
Drive agile project management, including creating user stories in Jira, leading grooming sessions, and ensuring successful sprint execution
Utilize expertise with VOC vendors such as Qualtrics, Medallia, and other customer experience platforms to design, analyze, and optimize VOC programs
Partner with cross-functional teams to implement technology solutions that enhance customer insights and improve overall experience
Ensure seamless integration of VOC systems with enterprise data architecture to support data-driven decision-making and actionable insights
Manage data ingestion processes, ensuring seamless integration across multiple systems
Lead system integrations, APIs, and real-time payload creation to optimize VOC programs
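A minimal sketch, under assumed field names rather than any Qualtrics or Medallia schema, of the payload validation that sits at the front of a VOC ingestion flow like the one described above:

```python
from datetime import datetime, timezone

REQUIRED = {"survey_id", "respondent_id", "score"}

def normalize_voc_payload(raw: dict) -> dict:
    # Reject records missing required fields before they reach downstream systems.
    missing = REQUIRED - raw.keys()
    if missing:
        raise ValueError(f"payload missing fields: {sorted(missing)}")
    score = int(raw["score"])
    if not 0 <= score <= 10:  # assumes an NPS-style 0-10 scale
        raise ValueError(f"score out of range: {score}")
    return {
        "survey_id": str(raw["survey_id"]),
        "respondent_id": str(raw["respondent_id"]),
        "score": score,
        "comment": (raw.get("comment") or "").strip(),
        "ingested_at": datetime.now(timezone.utc).isoformat(),  # audit timestamp
    }
```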
Desired Skills/Experience:
Bachelor's degree
6+ years of relevant experience in data engineering, system integration, and VOC program implementation
Proficiency in SQL to extract, manipulate, and analyze large datasets
Hands-on experience with system integrations, data engineering, and data migration, with a solid understanding of data architecture across multiple systems
A background in marketing, computer science, business transformation, data science, or customer experience in a business, agency, or consulting environment
Experience developing and automating data analytics and conducting ad-hoc analyses
Strong analytical skills with a proven ability to meet and exceed business objectives
A high level of accountability and ownership
The ability to build strong relationships with business partners, manage multiple projects simultaneously, and deliver results on time
Benefits:
Medical, Dental, & Vision Insurance Plans
401K offered
$24.50 - $35.00 (est. hourly)
Senior Data Engineer
Data Scientist Job 23 miles from Hawthorne
Our large digital transformation company is seeking a talented and experienced Data Engineer to join our team, specializing in Databricks and the financial industry. As a Data Engineer, you will play a crucial role in designing, developing, and maintaining data pipelines and infrastructure through Databricks to support financial clients' data-driven initiatives.
Key Responsibilities Include:
- Architect and design end-to-end data solutions using Databricks to process and analyze large-scale financial data.
- Collaborate with stakeholders to understand business requirements and translate them into technical solutions.
- Develop and implement data integration, transformation, and validation processes to ensure data accuracy and consistency.
- Lead the development and optimization of ETL pipelines to ingest data from various sources into data lakes and warehouses.
- Ensure the reliability, performance, and scalability of data architectures.
Must haves:
7+ years in data architecture, helping to design solutions within the financial industry
Strong proficiency in SQL, Python, and Spark
Deep understanding of data warehousing concepts and technologies (e.g., Snowflake, Redshift).
Databricks Certified
Senior Data Engineer
Data Scientist Job 23 miles from Hawthorne
Job Title: Data Engineer (Databricks, ETL, Data & AI Platforms)
Job Type: Contract (40 hours a week)
We are working with a financial services firm that specializes in providing insights and analytics for banks, lenders, and other financial institutions. Their core focus is helping clients optimize their pricing, profitability, and customer engagement strategies through advanced data analysis and market intelligence.
We are looking for a skilled and motivated Data Engineer with 3 years of expertise in Databricks, ETL processes, and building scalable data and AI platforms. This role is pivotal in supporting the migration of products and will involve designing, implementing, and optimizing data pipelines to ensure seamless data integration across various systems. The ideal candidate will be passionate about leveraging cutting-edge technology to build robust and efficient data systems to power business intelligence, analytics, and AI-driven initiatives.
Key Responsibilities:
Databricks Development: Design, build, and optimize scalable data pipelines using Databricks to process large datasets and integrate various data sources into a unified data platform.
ETL Pipelines: Develop and manage ETL processes to ingest, transform, and load data from diverse sources (on-premise and cloud-based) into data lakes or data warehouses. Ensure data quality, consistency, and integrity throughout the entire pipeline (a minimal PySpark sketch follows this list).
Platform Development: Build and maintain data and AI platforms, ensuring that they are secure, efficient, and capable of handling high volumes of data. Collaborate closely with AI/ML teams to enable seamless integration of models into production systems.
Product Migration Support: Assist in the migration of legacy systems and products to modern cloud-based data solutions. Ensure smooth data transfer, transformation, and system integration during the migration process.
Data Modeling: Collaborate with data scientists, analysts, and business stakeholders to design and implement appropriate data models for business intelligence and machine learning workloads.
Optimization and Monitoring: Optimize performance of data pipelines and platforms for speed and cost-effectiveness. Continuously monitor the health of data systems, troubleshoot issues, and implement improvements.
Collaboration and Documentation: Work closely with cross-functional teams including data scientists, DevOps, and product managers to ensure alignment with business needs and best practices. Maintain comprehensive documentation for all data engineering processes, pipelines, and systems.
Security and Compliance: Ensure all data solutions meet security, privacy, and compliance standards. Implement proper access controls and data governance measures.
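As noted in the ETL item above, a minimal PySpark sketch of one such Databricks pipeline step; the paths, table names, and columns are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("pricing-etl").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw/pricing/")  # assumed landing zone

clean = (
    raw.filter(F.col("account_id").isNotNull())         # basic data-quality gate
       .withColumn("rate_bps", F.col("rate") * 10_000)  # example transformation
       .withColumn("load_date", F.current_date())
       .dropDuplicates(["account_id", "as_of_date"])    # enforce the table grain
)

# Append into a governed Delta table; partitioning supports downstream queries.
(clean.write.format("delta")
      .mode("append")
      .partitionBy("load_date")
      .saveAsTable("analytics.pricing_daily"))
```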
Key Skills & Qualifications:
Databricks: Proven experience working with Databricks, including developing and managing notebooks, jobs, and clusters, as well as leveraging Spark for distributed data processing.
ETL Tools and Frameworks: Strong experience with ETL technologies (e.g., Apache Airflow, AWS Glue, or Azure Data Factory), and proficiency in building end-to-end data pipelines.
Data Integration: Expertise in integrating structured and unstructured data from multiple sources, including relational databases, APIs, and cloud-based data sources.
Cloud Technologies: Experience with cloud platforms such as AWS, Azure, or Google Cloud, including data storage services (e.g., S3, Blob Storage, BigQuery) and compute services.
Data Warehousing: Experience with modern data warehousing solutions such as Snowflake, Redshift, or BigQuery.
Programming Languages: Proficient in Python, Scala, or Java for building data pipelines, along with knowledge of SQL for querying and managing relational databases.
AI/ML Collaboration: Familiarity with data science and machine learning concepts, with experience enabling AI workflows on data platforms.
Problem-Solving: Strong troubleshooting, debugging, and performance optimization skills.
Communication: Excellent communication and collaboration skills, able to work effectively with stakeholders at all levels of the organization.
Preferred Qualifications:
Experience with automated deployment pipelines and CI/CD in data engineering workflows.
Familiarity with data governance tools and frameworks (e.g., Apache Atlas, AWS Lake Formation).
Experience with containerization technologies such as Docker and Kubernetes.
Knowledge of data security principles and best practices in a cloud-based environment.
Data Engineer
Data Scientist Job 23 miles from Hawthorne
Key Responsibilities:
Collaborate with various management teams to ensure proper integration of functions to achieve goals and identify necessary system enhancements for new products and process improvements.
Address a variety of high-impact problems/projects through detailed evaluation of complex business and system processes, as well as industry standards.
Provide expertise in applications programming and ensure that application design aligns with the overall architecture blueprint.
Utilize advanced knowledge of system flow to develop standards for coding, testing, debugging, and implementation.
Develop a thorough understanding of how different business areas, such as architecture and infrastructure, work together to achieve business objectives.
Conduct in-depth analysis with interpretive thinking to identify issues and develop innovative solutions.
Act as an advisor or coach to mid-level developers and analysts, assigning work as needed.
Assess risks appropriately when making business decisions, ensuring compliance with applicable laws, rules, and regulations, adhering to policies, applying sound ethical judgment, and managing and reporting control issues transparently.
Qualifications:
6-10 years of relevant experience in application development or systems analysis, with expertise in Pyspark, SQL, and Python.
Extensive experience in system analysis and software application programming.
Proven track record in managing and implementing successful projects.
Subject Matter Expert (SME) in at least one area of application development.
Ability to quickly adjust priorities as needed.
Demonstrated leadership and project management skills.
Clear and concise written and verbal communication skills.
Education:
Bachelor's degree or equivalent experience.
Desired Skills and Experience
Python
SQL
Pyspark
Data Architecture
Application Development
Data Architect
Data Scientist Job 23 miles from Hawthorne
About Us:
LTIMindtree is a global technology consulting and digital solutions company that enables enterprises across industries to reimagine business models, accelerate innovation, and maximize growth by harnessing digital technologies. As a digital transformation partner to more than 700+ clients, LTIMindtree brings extensive domain and technology expertise to help drive superior competitive differentiation, customer experiences, and business outcomes in a converging world. Powered by nearly 90,000 talented and entrepreneurial professionals across more than 30 countries, LTIMindtree - a Larsen & Toubro Group company - combines the industry-acclaimed strengths of erstwhile Larsen and Toubro Infotech and Mindtree in solving the most complex business challenges and delivering transformation at scale. For more information, please visit ********************
Job Title:
Architect - Data Governance - Dataedo
Work Location:
USA (New York, NY) (Onsite/Hybrid)
Job Description:
Mandatory Skills: Informatica Axon, Informatica Data Privacy, Informatica Data Quality, Informatica PowerCenter
7 years of experience in architecting data governance solutions
Good knowledge of data governance principles, practices, and regulations
Hands-on experience with Dataedo for metadata management and data lineage
Ability to document, maintain, and manage metadata effectively
Skill in tracking and visualizing data flows and transformations
Strong interpersonal skills for collaborating with stakeholders and providing training
Desirable:
Certification in Data Governance
Benefits/perks listed below may vary depending on the nature of your employment with LTIMindtree (“LTIM”):
Benefits and Perks:
Comprehensive Medical Plan Covering Medical, Dental, Vision
Short Term and Long-Term Disability Coverage
401(k) Plan with Company match
Life Insurance
Vacation Time, Sick Leave, Paid Holidays
Paid Paternity and Maternity Leave
The range displayed on each job posting reflects the minimum and maximum salary target for the position across all US locations. Within the range, individual pay is determined by work location and job level and additional factors including job-related skills, experience, and relevant education or training. Depending on the position offered, other forms of compensation may be provided as part of overall compensation, like an annual performance-based bonus, sales incentive pay, and other forms of bonus or variable compensation.
Disclaimer: The compensation and benefits information provided herein is accurate as of the date of this posting.
LTIMindtree is an equal opportunity employer that is committed to diversity in the workplace. Our employment decisions are made without regard to race, color, creed, religion, sex (including pregnancy, childbirth or related medical conditions), gender identity or expression, national origin, ancestry, age, family-care status, veteran status, marital status, civil union status, domestic partnership status, military service, handicap or disability or history of handicap or disability, genetic information, atypical hereditary cellular or blood trait, union affiliation, affectional or sexual orientation or preference, or any other characteristic protected by applicable federal, state, or local law, except where such considerations are bona fide occupational qualifications permitted by law.
Safe return to office:
In order to comply with LTIMindtree's company COVID-19 vaccine mandate, candidates must be able to provide proof of full vaccination against COVID-19 before or by the date of hire. Alternatively, one may submit a request for reasonable accommodation from LTIMindtree's COVID-19 vaccination mandate for approval, in accordance with applicable state and federal law, by the date of hire. Any request is subject to review through LTIMindtree's applicable processes.
Data Engineer
Data Scientist Job 15 miles from Hawthorne
The Data Tech team is looking for a Data Engineer to join a diverse team dedicated to providing best-in-class data services to our customers, stakeholders, and partners. As part of our organization, you will work with our Client Engineers, Data Scientists, and various business units (BUs) to define solutions for operationalizing data-driven decision making in a cost-effective and scalable manner.
Primary skillset: AWS, Glue, Step Functions, Python, Redshift
Qualifications:
Bachelor's degree in Computer Science, Software Engineering, MIS or equivalent combination of education and experience
Experience implementing and supporting data lakes, data warehouses, and data applications on AWS for large enterprises
AWS Solutions Architect or AWS Big Data Certification preferred
Programming experience with Java, Python/Scala, Shell scripting
Solid experience with AWS services such as CloudFormation, S3, Glue, EMR/Spark, RDS, Redshift, DynamoDB, Lambda, Step Functions, IAM, KMS, SM, etc.
Solid experience implementing solutions on AWS-based data lakes
Experience implementing metadata solutions leveraging AWS non-relational data solutions such as ElastiCache and DynamoDB
Experience in AWS data lake, data warehouse and business analytics
Experience in system analysis, design, development, and implementation of data ingestion pipeline in AWS
Knowledge of ETL/ELT
Working experience with Hadoop, HDFS, Sqoop, Hive, Python, and Spark is desired
Experience working on Agile projects
Requirements:
3 - 5 years of experience as Data Engineer
Experience developing business applications using NoSQL/SQL databases.
Experience working with object stores (S3) and JSON is a must-have.
Should have good experience with AWS services: API Gateway, Lambda, Step Functions, SQS, DynamoDB, S3, Elasticsearch.
Serverless application development using AWS Lambda.
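A brief sketch of the serverless pattern these requirements name: an AWS Lambda handler that accepts an API Gateway JSON payload and persists it to DynamoDB; the table name and fields are placeholders:

```python
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("example-events")  # hypothetical table name

def handler(event, context):
    # API Gateway proxy integration delivers the request body as a JSON string.
    body = json.loads(event.get("body") or "{}")
    if "event_id" not in body:
        return {"statusCode": 400, "body": json.dumps({"error": "event_id required"})}
    table.put_item(Item=body)  # upsert keyed on the table's partition key
    return {"statusCode": 200, "body": json.dumps({"stored": body["event_id"]})}
```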
Responsibilities:
Designing, building and maintaining efficient, reusable and reliable architecture and code
Ensure the best possible performance and quality of high scale web applications and services
Participate in the architecture and system design discussions
Independently perform hands on development and unit testing of the applications
Collaborate with the development team and build individual components into complex enterprise web systems
Work in a team environment with product, frontend design, production operations, QE/QA, and cross-functional teams to deliver a project throughout the whole software development cycle
Identify and resolve any performance issues
Keep up to date with new technology development and implementation
Participate in code review to make sure standards and best practices are met
Plus:
Experience with business intelligence tools such as Tableau, Power BI, or equivalent
Awareness of machine learning algorithms (supervised/unsupervised)
AWS Developer certification is nice to have
Senior Big Data Engineer (PySpark & Hadoop) - No C2C
Data Scientist Job 18 miles from Hawthorne
Senior Big Data Engineer (PySpark & Hadoop)
Hiring drive - interview days: Thursday, 03/13 and Friday, 03/14. All interviews will be conducted on these days.
Job Description:
We are seeking an experienced Senior Big Data Engineer with a strong background in PySpark and Hadoop to join our direct client in the banking industry. The ideal candidate will have a deep understanding of large-scale data processing and optimization, along with hands-on experience in building high-performance data solutions.
Key Requirements:
8+ years of experience in big data development using PySpark within the banking/financial sector.
Expertise in designing, developing, and optimizing large-scale data processing applications to ensure performance and efficiency.
In-depth proficiency with PySpark and the Apache Spark ecosystem for distributed data processing.
Strong programming skills in Python with a focus on PySpark.
Comprehensive understanding of Hadoop architecture, Hive, and HDFS for data storage and retrieval.
Advanced proficiency in SQL development, including query optimization and performance tuning for high-volume data processing.
This role is a great fit for someone who thrives in banking and financial environments, handling complex data pipelines and optimizing large-scale big data applications.
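For a concrete flavor of the optimization skills listed above, a sketch of one common PySpark tuning move: broadcasting a small dimension table so the join against a large Hive-backed fact table avoids a full shuffle. Table and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("txn-agg").getOrCreate()

txns = spark.table("warehouse.transactions")  # large Hive-backed fact table
branches = spark.table("warehouse.branches")  # small dimension table

daily = (
    txns.join(F.broadcast(branches), "branch_id")  # broadcast hint: no shuffle
        .groupBy("branch_region", "txn_date")
        .agg(F.sum("amount").alias("total_amount"))
        .repartition("txn_date")  # align partitions with the write pattern
)
daily.write.mode("overwrite").saveAsTable("warehouse.daily_branch_totals")
```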
“Mindlance is an Equal Opportunity Employer and does not discriminate in employment on the basis of - Minority/Gender/Disability/Religion/LGBTQI/Age/Veterans.”
Lead Data Engineer
Data Scientist Job 9 miles from Hawthorne
This role is for candidates who can work on a W2 basis.
About the Company:
A large-scale project that will process hundreds of terabytes of ERP data and other relevant data. Auditors will use this processed information to opine on client engagements and provide value-added services to their clients. The project currently uses Azure data technologies and Databricks.
About the Role:
We are looking for an experienced professional with 6+ years of hands-on experience with Databricks technology.
Responsibilities:
Design, build, and manage scalable data architectures using Databricks and Azure data technologies.
Design and implement modern data lakehouse solutions, ensuring seamless data integration, storage, and processing.
Collaborate with cross-functional teams to gather and translate business requirements into effective technical designs.
Ensure data quality and governance practices are implemented throughout the data lifecycle.
Optimize data workflows for performance, reliability, and security using Azure OneLake, Databricks Workflows and Databricks Compute, Unity Catalog, Azure Synapse, and Power BI.
Develop and enforce best practices in data modeling, pipeline design, and storage architecture.
Conduct regular assessments of data systems to identify and address performance bottlenecks or areas for improvement.
Qualifications:
10+ Years of total IT experience in Data engineering and Data warehouse project development.
Required Skills:
6+ years of hands-on experience with Azure Databricks and expertise in PySpark & Python development.
Proven expertise in designing and managing scalable data architectures. Experience with Databricks Serverless is a plus.
6+ years' experience with Azure Synapse, Data Factory, and other Azure data technologies.
8+ years' experience in data modeling; 6+ years' experience with data pipeline design and implementation and cloud storage architecture.
Deep understanding of data quality and governance practices, with hands-on data quality and governance experience using Unity Catalog or Azure Purview.
Ability to collaborate with cross-functional teams and translate business requirements into technical designs.
Preferred Skills:
Familiarity with other modern data processing tools and technologies.
Previous experience in a similar role working with large-scale data systems.
Celonis Data Engineer
Data Scientist Job 10 miles from Hawthorne
4 to 10 years of experience as a Process Analyst/SME in the data warehouse (DW) domain, with at least 2-3 years of customer project-based experience working as an Analyst or Data Engineer on Celonis implementations.
Certification and knowledge of real-time business cases in Process Mining.
Proficiency with SQL or other programming languages (Python) and a strong interest in Data Mining and Process Mining.
Proficiency with UNIX/Shell scripting.
Writing SQL queries and shell scripts for Data acquisition.
Strong understanding of execution-oriented solutions in Celonis.
Excellent analytical skills; well organized and known for being a quick learner.
Strong communication skills and enjoy interacting with various customers to understand and interpret business processes.
If you're interested, please share your resume at
****************************
Life At Capgemini
Capgemini supports all aspects of your well-being throughout the changing stages of your life and career. For eligible employees, we offer:
Flexible work
Healthcare including dental, vision, mental health, and well-being programs
Financial well-being programs such as 401(k) and Employee Share Ownership Plan
Paid time off and paid holidays
Paid parental leave
Family building benefits like adoption assistance, surrogacy, and cryopreservation
Social well-being benefits like subsidized back-up child/elder care and tutoring
Mentoring, coaching and learning programs
Employee Resource Groups
Disaster Relief
Disclaimer
Capgemini is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status or any other characteristic protected by law.
This is a general description of the Duties, Responsibilities and Qualifications required for this position. Physical, mental, sensory or environmental demands may be referenced in an attempt to communicate the manner in which this position traditionally is performed. Whenever necessary to provide individuals with disabilities an equal employment opportunity, Capgemini will consider reasonable accommodations that might involve varying job requirements and/or changing the way this job is performed, provided that such accommodations do not pose an undue hardship.
Capgemini is committed to providing reasonable accommodations during our recruitment process. If you need assistance or accommodation, please reach out to your recruiting contact.
Click the following link for more information on your rights as an Applicant **************************************************************************
Data Analyst
Data Scientist Job 18 miles from Hawthorne
The pay for this role will likely be $25-30/hr. This is an ongoing contract role.
A client of ours is seeking a skilled Data Analyst to manage long-term clients' data systems and reporting frameworks, ensuring data integrity and accuracy. This role also involves consulting for organizations requiring episodic CRM support, including data conversions, reporting, and process documentation. The ideal candidate will transform raw data into actionable insights that drive fundraising and community development efforts. The role covers the full data analysis lifecycle, from requirement gathering to execution and design planning.
Key Responsibilities
Clean, prepare, and maintain data.
Perform data exploration and analysis.
Develop reports and data visualizations.
Collaborate with colleagues and clients to understand data needs and provide actionable insights.
Additional duties as assigned.
Experience & Qualifications
Minimum of 5 years of work experience, including 2-3 years of hands-on CRM experience with Blackbaud CRM, Raiser's Edge, Virtuous, or similar platforms.
Bachelor's degree required.
Background in data entry, finance, operations, or nonprofit fundraising preferred.
Familiarity with Salesforce and/or Great Plains software or any other CRM is a plus.
Skills & Competencies
Proficiency in Microsoft Office Suite and the ability to learn new software tools.
Strong statistical foundation with experience using Excel, SPSS, SAS, SQL, Python, or R.
Excellent analytical skills to compile, structure, and present large datasets with accuracy.
Ability to evaluate data critically and derive meaningful insights.
Strong report writing and presentation abilities.
Excellent organizational and multitasking skills.
Effective communication and collaboration skills across teams.
Understanding of data privacy regulations and best practices in donor data management.
Ability to present complex data insights in an accessible manner for non-technical audiences.
Lead Data Engineer - Data Reliability
Data Scientist Job 23 miles from Hawthorne
On any given day at Disney Entertainment & ESPN Technology, we're reimagining ways to create magical viewing experiences for the world's most beloved stories while also transforming Disney's media business for the future. Whether that's evolving our streaming and digital products in new and immersive ways, powering worldwide advertising and distribution to maximize flexibility and efficiency, or delivering Disney's unmatched entertainment and sports content, every day is a moment to make a difference to partners and to hundreds of millions of people around the world.
A few reasons why we think you'd love working for Disney Entertainment & ESPN Technology
Building the future of Disney's media business: DE&E Technologists are designing and building the infrastructure that will power Disney's media, advertising, and distribution businesses for years to come.
Reach & Scale: The products and platforms this group builds and operates delight millions of consumers every minute of every day - from Disney+ and Hulu, to ABC News and Entertainment, to ESPN and ESPN+, and much more.
Innovation: We develop and execute groundbreaking products and techniques that shape industry norms and enhance how audiences experience sports, entertainment & news.
The Data Reliability Engineering team for Disney's Product and Data Engineering team is responsible for maintaining and improving the reliability of Disney Entertainment's big data platform, which processes hundreds of terabytes of data and billions of events daily.
About The Role
The Lead Data Engineer will help us in the ongoing mission of delivering outstanding services to our users, allowing Disney Entertainment to be more data-driven. You will work closely with our partner teams to monitor and drive improvements for reliability and observability of their critical data pipelines and deliverables. This is a high-impact role where your work informs decisions affecting millions of consumers, with a direct tie to The Walt Disney Company's revenue. You will be making an outsized impact in an organization that values data as its top priority. We are a tight and driven team with big goals, so we seek people who are passionate about solving the toughest challenges and working at scale, using, supporting, and building distributed systems in a fast-paced collaborative team environment. We also support a healthy work-life balance.
Responsibilities
Assist in designing and developing a platform to support incident observability and automation. This team will be required to build high-quality data models and products that monitor and report on data pipeline health and data quality.
Lead project work efforts internally and externally: set project deliverables, review design documents, perform code reviews, and help mentor junior members of the team.
Collaborate with engineering teams to improve, maintain, performance tune, and respond to incidents on our big data pipeline infrastructure.
Own building out key components for observability and intelligent monitoring of data pipelines and infrastructure to achieve early and automated anomaly detection and alerting (a minimal sketch follows this list). Present your research and insights to all levels of the company, clearly and concisely.
Build solutions to continually improve our software release and change management process using industry best practices to help DSS maintain legal compliance.
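As referenced in the monitoring item above, a minimal sketch, an assumed approach rather than the team's actual implementation, of automated anomaly detection on pipeline run durations:

```python
import statistics

def is_anomalous(history_minutes: list[float], latest: float, z_cutoff: float = 3.0) -> bool:
    """Robust z-score using median/MAD so one bad run doesn't skew the baseline."""
    med = statistics.median(history_minutes)
    mad = statistics.median(abs(x - med) for x in history_minutes) or 1e-9
    robust_z = 0.6745 * (latest - med) / mad  # 0.6745 rescales MAD to ~sigma
    return abs(robust_z) > z_cutoff

# Example: a pipeline that usually takes ~40 minutes suddenly takes 95.
print(is_anomalous([38, 41, 40, 39, 42, 40, 37], 95))  # -> True, alert fires
```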
Basic Qualifications
7+ years of experience working on mission critical data pipelines and ETL systems.
7+ years of hands-on experience with big data technology, systems and tools such as AWS, Hadoop, Hive, and Snowflake
Detailed problem-solving approach, coupled with a strong sense of ownership and drive
A bias to action and a passion for delivering high-quality data solutions
Expertise with common Data Engineering languages such as Python, Scala, Java, SQL and a proven ability to learn new programming languages
Experience with workflow orchestration tools such as Airflow
Deep understanding of end-to-end pipeline design and implementation.
Attention to detail and quality with excellent problem solving and interpersonal skills
Preferred Qualifications
Advanced degrees are a plus.
Strong data visualizations skills to convey information and results clearly
Ability to work independently and drive your own projects.
Exceptional interpersonal and communication skills.
Impactful presentation skills in front of a large and diverse audience.
Experience with DevOps tools such as Docker, Kubernetes, Jenkins, etc.
Innate curiosity about consumer behavior and technology
Experience with event messaging frameworks like Apache Kafka
Being a fan of movies and television is a strong plus.
Required Education
Bachelor's degree in Computer Science, Information Systems, Software, Electrical or Electronics Engineering, or comparable field of study, and/or equivalent work experience
Additional Information
#DISNEYTECH
The hiring range for this position in Santa Monica, California is $152,200 to $204,100 per year, in Bristol, Connecticut is $152,200 to $204,100 per year, in Seattle, Washington is $159,500 to $213,900 per year, in New York City, NY is $159,500 to $213,900 per year, and in San Francisco, California is $166,800 to $223,600 per year. The base pay actually offered will take into account internal equity and also may vary depending on the candidate's geographic region, job-related knowledge, skills, and experience among other factors. A bonus and/or long-term incentive units may be provided as part of the compensation package, in addition to the full range of medical, financial, and/or other benefits, dependent on the level and position offered.
Lead Data Engineer, Data Reliability
Data Scientist Job 23 miles from Hawthorne
Lead Data Engineer, Data Reliability
Job ID: 10103077
Location: Santa Monica, California, United States / Bristol, Connecticut, United States / San Francisco, California, United States / New York, New York, United States / Seattle, Washington, United States
Business: Disney Entertainment & ESPN Technology
Date posted: Jan. 21, 2025
Job Summary:
On any given day at Disney Entertainment & ESPN Technology, we're reimagining ways to create magical viewing experiences for the world's most beloved stories while also transforming Disney's media business for the future. Whether that's evolving our streaming and digital products in new and immersive ways or delivering Disney's unmatched entertainment and sports content, every day is a moment to make a difference to partners and to hundreds of millions of people around the world.
The Data Reliability Engineering team for Disney's Product and Data Engineering team is responsible for maintaining and improving the reliability of Disney Entertainment's big data platform, which processes hundreds of terabytes of data and billions of events daily.
The Lead Data Engineer will help us in the ongoing mission of delivering outstanding services to our users allowing Disney Entertainment to be more data-driven. You will work closely with our partner teams to monitor and drive improvements for reliability and observability of their critical data pipelines and deliverables. This is a high-impact role where your work informs decisions affecting millions of consumers, with a direct tie to The Walt Disney Company's revenue. We seek people who are passionate about solving the toughest challenges and working at scale, using, supporting, and building distributed systems in a fast-paced collaborative team environment.
Assist in designing and developing a platform to support incident observability and automation.
Lead project work efforts internally and externally: set project deliverables, review design documents, perform code reviews, and help mentor junior members of the team.
Collaborate with engineering teams to improve, maintain, performance tune, and respond to incidents on our big data pipeline infrastructure.
Own building out key components for observability and intelligent monitoring of data pipelines and infrastructure.
Build solutions to continually improve our software release and change management process.
Minimum Qualifications:
7+ years of experience working on mission critical data pipelines and ETL systems.
7+ years of hands-on experience with big data technology, systems and tools such as AWS, Hadoop, Hive, and Snowflake.
Expertise with common Data Engineering languages such as Python, Scala, Java, SQL.
Experience with workflow orchestration tools such as Airflow.
Deep understanding of end-to-end pipeline design and implementation.
Bachelor's degree in Computer Science, Information Systems, Software, Electrical or Electronics Engineering, or comparable field of study.
#DISNEYTECH
Senior Data Engineer
Data Scientist Job 28 miles from Hawthorne
We are:
At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+, and has 58 offices in 21 countries within key global markets.
Our Challenge:
We are seeking to hire strong Data Engineers with expertise in Snowflake and Python, along with strong SQL expertise.
Additional Information :
The base salary for this position will vary based on geography and other factors. In accordance with law, the base salary for this role if filled within Iselin, NJ is $130k - $145k/year & benefits (see below).
Responsibilities:
Hands-on development experience with Snowflake features such as SnowSQL, Snowpipe, Python, Tasks, Streams, Time Travel, Zero-Copy Cloning, Optimizer, Metadata Manager, data sharing, and stored procedures (a short sketch of two of these features follows this list).
Experience in data warehousing: OLTP, OLAP, dimensions, facts, and data modeling.
Need to have working knowledge of MS Azure configuration items with respect to Snowflake.
Developing EL pipelines in and out of the data warehouse using a combination of Databricks, Python, and SnowSQL.
Strong understanding of Snowflake-on-Azure architecture, and of the design, implementation, and operationalization of large-scale data and analytics solutions on Snowflake Cloud Data Warehouse.
Developing scripts (UNIX, Python, etc.) to extract, load, and transform data, as well as other utility functions.
Provide production support for Data Warehouse issues such as data load problems and transformation/translation problems.
Translate mapping specifications to data transformation design and development strategies and code, incorporating standards and best practices for optimal execution.
Understand data pipelines and modern ways of automating them using cloud-based testing, and clearly document implementations so others can easily understand the requirements, implementation, and test conditions.
Perform code reviews to ensure fit to requirements, optimal execution patterns and adherence to established standards.
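As flagged in the first responsibility above, a short sketch of two of the named Snowflake features, Zero-Copy Cloning and Time Travel, driven from Python via the snowflake-connector-python package; connection parameters and object names are placeholders:

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="...",  # placeholders
    warehouse="ETL_WH", database="ANALYTICS", schema="PUBLIC",
)
cur = conn.cursor()

# Zero-Copy Clone: an instant, storage-free copy for a dev/test sandbox.
cur.execute("CREATE OR REPLACE TABLE ORDERS_DEV CLONE ORDERS")

# Time Travel: query the table as it existed one hour ago (within retention).
cur.execute("SELECT COUNT(*) FROM ORDERS AT(OFFSET => -3600)")
print(cur.fetchone()[0])

cur.close()
conn.close()
```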
Requirements:
Minimum 8 years of designing and implementing operational, production-grade, large-scale data solutions on Microsoft Azure Snowflake Data Warehouse,
including hands-on experience with productionized data ingestion and processing pipelines using Python, Databricks, and SnowSQL
Excellent understanding of Snowflake Internals and integration of Snowflake with other data processing and reporting technologies
Excellent presentation and communication skills, both written and verbal, with the ability to problem-solve and design in an environment with unclear requirements.
It would be great if you also had:
Detail-oriented, ability to turn deliverables around quickly with a high degree of accuracy
Strong analytical skills, ability to interpret business requirements and produce functional and technical design documents.
Good time management skills - Ability to prioritize and multi-task, handling multiple efforts at once.
Strong desire to understand and learn domain.
Experience in a financial services/banking industry
Ability to work in a fast-paced environment; to be flexible and learn quickly.
Ability to multi-task with attention to detail and prioritize tasks.
We can offer you:
A highly competitive compensation and benefits package
A multinational organization with 58 offices in 21 countries and the possibility to work abroad
Laptop and a mobile phone
15 days of paid annual leave (plus sick leave and national holidays)
A comprehensive insurance plan including: medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region)
Retirement savings plans
A higher education certification policy
Extensive training opportunities, focused on skills, substantive knowledge, and personal development
On-demand Udemy for Business for all Synechron employees with free access to more than 5000 curated courses
Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Center of Excellences (CoE) groups
Cutting edge projects at the world's leading tier-one banks, financial institutions and insurance firms
A flat and approachable organization
A truly diverse, fun-loving and global work culture
SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and is an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative ‘Same Difference' is committed to fostering an inclusive culture - promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, race, ethnicities, religion, age, marital status, gender, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
Data Architect
Data Scientist Job 28 miles from Hawthorne
We are a trusted Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what's next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world including 14 of the 30 most innovative US companies, 80% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem.
Our disruptor's mindset, commitment to client success, and agility to thrive in the dynamic environment have enabled us to sustain our growth momentum by reporting $291.71M revenue in Q2FY24, delivering 14.1% Y-o-Y growth. Our 22,800+ global team members, located in 21 countries, have been instrumental in helping the market leaders transform their industries. We're also pleased to share that Persistent won the 2023 Golden Peacock Award for Excellence in Corporate Governance within the IT sector. Acknowledging our cloud expertise, we were named a Challenger in the 2023 Gartner Magic Quadrant™ for Public Cloud IT Transformation Services. Throughout this market-leading growth, we've maintained strong employee satisfaction - over 94% of our employees approve of the CEO, and 89% would recommend working at Persistent to a friend.
Position: Data Architect
Location: Iselin, NJ (Hybrid)
Experience Level: 15-20 years, mostly in the Data space
Responsibilities
Help develop the overall data architecture and frameworks in modern cloud architectures on Databricks and Snowflake.
Manage the data architecture governance process: develop, communicate, and ensure adherence to architecture processes, principles, policies, and standards for the enterprise.
Hands-on data architect with strong programming skills in PySpark, Databricks, and SQL
Review business drivers and strategies, understand the implications for the application architecture, and identify/mitigate risks to solutions
Champion and communicate the data architecture to business leaders and IT teams, associating the implications of the architecture with objectives/drivers/goals
Accountable for the implementation of the architecture roadmap
Skills
Strong Data architecture fundamentals
Strong Data warehousing and data management experience.
Excellent skills in Databricks and PySpark
Should be an expert in SQL development
Good skills in SQL Server, database design, and data modeling are needed.
Experience in Data Fabric and Data-as-a-Service models.
Strong experience in Snowflake is mandatory.
Understanding of data quality, data reconciliation, and testing approaches
Strong Capital Markets and Risk Technology experience is mandatory.
Understanding of IT Strategy, development lifecycle and alternative delivery methods
Demonstrates thorough understanding of the key technologies which form the infrastructure necessary to effectively support existing and future business requirements
Analytical skills such as technical translation, abstraction, pattern recognition, and logical and holistic thinking
Breadth of knowledge across all aspects of infrastructure design, delivery, operations and evolution
Let's unleash your full potential at Persistent - persistent.com/careers
“Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind.”