Data Scientist - Finance & Engineering
Data Scientist Job 38 miles from Pinole
Business Integration Partners (BIP) is Europe's fastest-growing digital consulting company and is on track to reach the Top 20 by 2025, with an expanding global footprint in the US (New York, Boston, Chicago, Tampa, Charlotte, Dallas, and Houston). Operating at the intersection of business and technology, we design, develop, and deliver sustainable solutions at pace and scale, creating greater value for our customers, employees, shareholders, and society.
Position Overview:
BIP US is seeking to grow its US consulting team and is looking for a Data Scientist with 5-7 years of experience to join our BIP Bay Area consulting team. This consultant will work at the intersection of Finance and Engineering, helping us optimize revenue, track key financial metrics, and improve decision-making with data-driven insights. This consultant must thrive in fast-paced environments, have a passion for digging into data to drive business strategy, and be able to bridge the gap between Finance and Engineering.
You must have valid US work authorization and must physically reside in the Bay Area, within a 50-mile commute. We are unable to support relocation costs.
Please do not apply for this position unless you meet the criteria outlined above.
Key Responsibilities:
Usage-Based Billing Analysis - Develop and refine billing models by product, ensuring alignment between revenue and usage data.
Tracking Credits & Cost Margin Analysis - Monitor credit usage and analyze cost margins to optimize pricing strategies.
Churn Modeling & Customer Retention - Build predictive models to identify risk factors for customer churn and develop strategies to improve retention (a brief illustrative sketch follows this list).
Agent Conversion Analytics - Analyze user behavior and engagement data to improve conversion rates from free to paid customers.
Cycle Tracking & ARR Waterfall - Monitor customer lifecycle trends, track Annual Recurring Revenue (ARR) movements, and forecast financial outcomes.
Cross-functional Collaboration - Work closely with Finance, Product, and Engineering teams to ensure data insights drive strategic business decisions.
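To make the churn-modeling responsibility concrete, here is a minimal sketch of how such a model might be prototyped. It is illustrative only, not BIP's actual approach; the file name, feature columns, and model choice are assumptions.

```python
# Hypothetical churn-risk prototype; data source and column names are invented for illustration.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("customer_usage.csv")  # assumed: one row per customer
features = ["monthly_usage", "credit_balance", "days_since_last_login", "support_tickets"]
X, y = df[features], df["churned"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = GradientBoostingClassifier(random_state=42)
model.fit(X_train, y_train)

# Rank customers by predicted churn probability so retention efforts can be prioritized.
churn_prob = model.predict_proba(X_test)[:, 1]
print("Test AUC:", roc_auc_score(y_test, churn_prob))
```

In practice the features would come from the billing and usage data described above, and the model's output would feed the retention strategies mentioned in the responsibility.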
Qualifications:
Bachelor's degree and 5-7 years of experience in Data Science, Analytics, or a related field.
Strong expertise in data analytics, financial modeling, and business intelligence.
Experience with programs & tools such as Hashboard, Amplitude, Stripe, Google, Apple, DBT Labs, Bitquery, and Python.
Exceptional communication and storytelling skills - ability to present complex data in a clear and compelling way to executives.
Strong problem-solving skills with a curious and proactive mindset - analyzing data, asking the right questions, and finding solutions accordingly.
Experience working cross-functionally with Finance (CFO/FP&A) and Engineering teams.
Ability to thrive in ambiguity and adapt quickly in a high-growth startup environment.
**The base salary range for this role is $150,000-$175,000**
Benefits:
Choice of medical, dental, vision insurance.
Voluntary benefits.
Short- and long-term disability.
HSA and FSAs.
Matching 401k.
Discretionary performance bonus.
Employee referral bonus.
Employee assistance program.
9 public holidays.
22 days PTO.
PTO buy and sell program.
Volunteer days.
Paid parental leave.
Remote/hybrid work environment support.
For more information about BIP US, visit *********************************
Equal Employment Opportunity:
It is BIP US Consulting policy to provide equal employment opportunities to all individuals based on job-related qualifications and ability to perform a job, without regard to age, gender, gender identity, sexual orientation, race, color, religion, creed, national origin, disability, genetic information, veteran status, citizenship, or marital status, and to maintain a non-discriminatory environment free from intimidation, harassment or bias based upon these grounds.
BIP US provides a reasonable range of compensation for our roles. Actual compensation is influenced by a wide array of factors including but not limited to skill set, education, level of experience, and knowledge.
Data Scientist
Data Scientist Job 44 miles from Pinole
Brahma Consulting Group is a Recruitment firm powered by a mission to deliver personalized, industry-focused recruitment solutions. We are currently assisting our client, a global leader in enterprise-grade data analytics and AI solutions, in their search for a Data Scientist in Mountain View, CA. This is a full-time permanent position.
Responsibilities
Analyze raw data: assess quality, cleanse, and structure it for downstream processing
Design accurate and scalable prediction algorithms
Collaborate with engineering team to bring analytical prototypes to production
Generate actionable insights for business improvements
Qualifications
Bachelor's degree or equivalent experience in quantitative field (Statistics, Mathematics, Computer Science, Engineering, etc.)
3+ years of banking/payments analytics experience, preferably in a consulting setup
Strong experience developing and deploying predictive/ML solutions
Proficiency in SQL, Tableau, Python
Strong experience in Big Data Technologies like Hadoop, Spark or similar platforms
Excellent communication skills and strong stakeholder management experience
Strong problem-solving skills
Proficiency in Mandarin is a plus
Full Stack Data Scientist & Engineer
Data Scientist Job 38 miles from Pinole
Job Title: Full Stack Data Scientist & Engineer
About Confidential/Stealth Asset Manager:
Our client is a global investment firm, based in Menlo Park, with a mission to achieve the multi-generational goals and aspirations of our clients. Rooted in the endowment style of investing, we seek balanced diversification in order to protect and grow multi-generational capital over the long term. We are committed to the highest professional standards as fiduciaries of our clients' capital and our firm.
We are a collaborative and innovative team that thrives in a fast-paced environment and is committed to generating superior returns for our clients. We value intellectual curiosity, creative thinking, and a passion for using data to drive investment decisions. We foster a culture of mentorship and collaboration, providing opportunities for professional growth and development.
We are looking for a full-stack Data Engineer/Scientist. The role involves maintaining and extending scalable data warehouses and data lakes, designing robust data pipelines, and building dashboards and applications to visualize and analyze data.
There will also be opportunities to take on an expanded role depending on the firm's needs and your interest and ability to take on more responsibilities.
We're looking for someone who:
Has strong problem-solving skills and is an independent thinker
Displays excellent communication and interpersonal skills, especially with their internal customers
Takes initiative and a high degree of ownership and pride in their work
Connects with the high-level goals and can work independently
Is curious about new technologies and the investment industry
Seeks to understand the broader business (e.g., the clients, their problems, the opportunities)
Responsibilities:
Maintain and extend scalable data warehouses and data lakes to support business intelligence and analytics
Design, develop, and implement robust data pipelines to collect, transform, and store data from various sources
Provide operational support for our cloud infrastructure
Build dashboards and applications to visualize and analyze data
Construct quantitative investment tools
Support other analysts in their use of data and analytical tools
Document data pipelines and processes for knowledge sharing
Stay up-to-date on the latest data engineering and data science technologies
Plan and lead projects
Experience:
4-6 years of experience in a data engineering or data science role
Demonstrated ability to independently lead and complete data-driven projects
Experience with the full data engineering and data science stack, including:
Cloud administration and DevOps practices
Data warehousing and data lakehouse architectures and platforms (e.g., Databricks, Snowflake)
ETL tools and techniques (e.g., Airflow, dbt)
Programming languages (Python, SQL)
Designing and building analytical applications
Data visualization tools (e.g., Power BI, Sigma)
Machine learning libraries (e.g., Scikit-learn)
Compensation: Competitive compensation, bonus and benefits package including health, life insurance, backup childcare coverage, and 401k benefits.
Senior Data Scientist
Data Scientist Job 17 miles from Pinole
We are seeking an experienced Senior Data Scientist with over 5 years of experience in Data Science, Machine Learning (ML), and Artificial Intelligence (AI) to join our team in Redwood City, SF Bay Area.
Key Responsibilities:
Architect and design scalable and enterprise-level AI/ML models to solve complex business challenges.
Develop and optimize AI/ML algorithms to ensure high performance, accuracy, and efficiency.
Lead the end-to-end AI/ML lifecycle, including data collection, preprocessing, modeling, deployment, monitoring, and maintenance (a simplified sketch follows this list).
Collaborate with software engineers, data engineers, and product managers to integrate AI solutions seamlessly into existing systems.
Stay ahead of industry trends, exploring and implementing the latest advancements in AI, ML, deep learning, and data science.
Ensure best practices in ML model validation, interpretability, and performance optimization.
Mentor and guide junior data scientists and ML engineers, fostering a culture of innovation and technical excellence.
Work with big data technologies and cloud platforms such as AWS, GCP, or Azure to scale AI/ML solutions.
Implement MLOps practices to streamline deployment, monitoring, and retraining of ML models in production.
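As a simplified illustration of one slice of that lifecycle (train, validate, persist), consider the sketch below. It is a generic example, not this team's stack; the stand-in dataset and file name are assumptions.

```python
# Toy train/validate/persist loop on a stand-in dataset; not a production MLOps setup.
import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # stand-in for real business data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Keeping preprocessing and the model in one pipeline means the same steps run at inference time.
pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print("CV accuracy:", cross_val_score(pipe, X_train, y_train, cv=5).mean())

pipe.fit(X_train, y_train)
print("Held-out accuracy:", pipe.score(X_test, y_test))

# Persist the fitted pipeline; a serving or monitoring layer would load this artifact.
joblib.dump(pipe, "model.joblib")
```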
Required Qualifications:
5+ years of experience in Data Science, Machine Learning, or AI with a strong track record of delivering impactful AI solutions.
Expert-level proficiency in Python and ML frameworks such as TensorFlow, PyTorch, Scikit-Learn.
Deep expertise in machine learning algorithms, deep learning, NLP, computer vision, and reinforcement learning.
Proven experience in AI/ML model architecture, optimization techniques, and deploying models at scale.
Strong knowledge of data engineering principles, including data pipelines, feature engineering, and ETL processes.
Hands-on experience with cloud platforms (AWS, GCP, Azure) and big data technologies (Spark, Hadoop, Snowflake).
Strong background in statistics, mathematics, and probability theory.
Experience with MLOps, model deployment, and monitoring in production environments.
Excellent problem-solving skills, critical thinking, and adaptability in a fast-paced environment.
Strong communication skills, with the ability to present complex AI concepts to non-technical stakeholders.
Preferred Qualifications:
Bachelor's or Master's degree in Computer Science.
Experience with real-time AI applications.
Knowledge of AI ethics, explainability, and responsible AI frameworks.
Why Join Us?
Cutting-edge AI/ML projects: Work on innovative solutions with real-world impact across industries.
Collaborative and high-growth environment: Work alongside top talent in AI, data science, and engineering.
Competitive compensation & benefits: Market-leading salary, stock options, and career growth opportunities.
Sr. Data Scientist
Data Scientist Job 44 miles from Pinole
This is a senior position that requires existing and deep hands-on experience. This is an applied data science engineering role: you need to be a data scientist first, with the deep theoretical foundations to design, build, train, and hyperparameter-tune ML/DL models, and at the same time have the practical, hands-on engineering know-how to deploy those models in a scalable production environment.
DO NOT apply if you are a:
recent graduate: this is not your first job.
data scientist but haven't been part of deploying your models in real world, production environment.
data engineer but haven't designed multimodal models and don't know the inner-workings of ML/DL algos.
Our interviews include in-person, face-to-face sessions: do not apply if you can't meet in person, or are unable to take interviews without the help of an AI chatbot.
NO RECRUITERS PLEASE.
Cognomotiv's Market
The complexity of modern vehicles is increasing at breakneck speed. The introduction of an array of brand-new features, along with refinements of old capabilities, has wildly expanded the role of AI, software, and in-vehicle services in today's vehicles. More and more functionality is shifting from electronic and mechanical systems to onboard computers controlled by AI. This trend is only accelerating: a myriad of new services rely heavily on vehicles being constantly online, while autonomy is becoming increasingly real.
AI and software services offer amazing new opportunities to the automotive world, but they bring with them complexity that creates maintenance and repair challenges, as well as operational failures rooted in the underlying hardware, software, and AI that control them.
Who We Are
At Cognomotiv, we help automotive OEMs, fleet operators, suppliers, and service centers guarantee the reliability of every vehicle on the road, thanks to state-of-the-art AI that identifies anomalous vehicle behavior in real time, detects and predicts issues, and provides guidance on how to fix the problems.
The Cognomotiv solution is a blend of edge and backend AI that performs non-intrusive system diagnosis and prognosis, and makes remediation recommendations with powerful and efficient native models. Our AI is unique in its ability to perform fault and failure detection and prediction both in resource-constrained environments at the edge and in the cloud, and to provide guided repair. Cognomotiv is the ultimate professional-grade MRO (Maintenance, Repair, Overhaul) Copilot.
Description
We are seeking an ambitious, self-reliant data science engineer who operates with a sense of urgency and has the ability to thrive in a fast-paced environment. You will design and build the robust, supportable, highly scalable AI foundation of Cognomotiv's solution:
● Optimize and test ML and statistical models to report and predict systems' health in a variety of subsystems, including mechanical, sensor, and onboard computer.
● Research and develop methods for anomaly detection, failure diagnosis, and root cause analysis using systems data streams as well as information about the system (a toy sketch follows this description).
● Design and implement models to detect breaches in system reliability.
● Organize and manage fleet data flow, storage, and automated procedures.
● Use a variety of ML/DL techniques to mine data for fault patterns and precursors.
● Build and deploy RAG, Generative and Agentic AI to automate scalable guided-repair procedure generation.
● Organize and clearly present or publish findings.
● Assist in porting models to native language for edge computation and learning.
You possess deep technical skills that will be used to drive insights into new products, working closely with both data scientists and engineers. You will have the opportunity to synthesize learning from edge-trained models, a paradigm the entire IoT industry is striving for. You will have access to new and rich datasets, and use them to directly provide safety and reliability to all fully or semi-automated systems.
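For a flavor of the anomaly-detection work, here is a toy sketch using simulated telemetry. It is illustrative only and says nothing about Cognomotiv's actual models; the signal names, values, and thresholds are invented.

```python
# Toy anomaly detection over simulated "telemetry"; signals and parameters are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Pretend columns: oil temperature (C), battery voltage (V), controller CPU load.
normal = rng.normal(loc=[90.0, 12.6, 0.4], scale=[5.0, 0.2, 0.1], size=(1000, 3))
faulty = rng.normal(loc=[130.0, 10.5, 0.9], scale=[5.0, 0.2, 0.1], size=(10, 3))
readings = np.vstack([normal, faulty])

# Fit on known-good data, then score everything; -1 marks anomalous samples.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = detector.predict(readings)

print("Flagged readings:", int((flags == -1).sum()))
```

A production system would of course operate on streaming data and feed diagnosis and root-cause steps downstream.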
Education & Experience Requirements:
● Degree in a quantitative field such as statistics or physics (advanced degree or Ph.D. preferred, or equivalent experience)
● 5+ years of industry experience in a data science engineering role
● Proficiency in large-scale data interaction -- SQL, Hadoop, etc.
● Existing experience designing, building, and deploying multimodal ML/DL models is a huge plus
● Existing experience working with both streaming and batch-mode ML/DL is a huge plus
● Expert knowledge of one or more scripting languages (e.g., Python)
● Basic proficiency in at least one object-oriented programming language (e.g. C++, Java)
● Solid understanding and working knowledge of RAG, SLM, Graphs, GNN, Agentic AI
● Solid understanding and working knowledge of fundamental probability and statistics
● Effective communication skills: can turn vague, startup-grade input to concrete, value-added output
● The ideal candidate can independently recognize needs/possibilities and provide creative solutions.
Staff Data Scientist - Inference & Algorithms
Data Scientist Job 45 miles from Pinole
LinkedIn is the world's largest professional network, built to help members of all backgrounds and experiences achieve more in their careers. Our vision is to create economic opportunity for every member of the global workforce. Every day our members use our products to make connections, discover opportunities, build skills and gain insights. We believe amazing things happen when we work together in an environment where everyone feels a true sense of belonging, and that what matters most in a candidate is having the skills needed to succeed. It inspires us to invest in our talent and support career growth. Join us to challenge yourself with work that matters.
LinkedIn's Data Science team leverages big data to empower business decisions and deliver data-driven insights, metrics, and tools in order to drive member engagement, business growth, and monetization efforts. With over 1 billion members around the world, a focus on great user experience, and a mix of B2B and B2C programs, a career at LinkedIn offers countless ways for an ambitious data scientist to have an impact.
We are now looking for a talented and driven individual to accelerate our efforts and be a major part of our data-centric culture. This person is expected to understand experimentation and/or machine learning techniques well enough to implement them from scratch, and to extend and enhance these techniques for specific applications such as business problems. Successful candidates will exhibit technical acumen in inference and algorithms, and the business savvy to use these technical skills to drive better business decision-making.
At LinkedIn, our approach to flexible work is centered on trust and optimized for culture, connection, clarity, and the evolving needs of our business. The work location of this role is hybrid, meaning it will be performed both from home and from a LinkedIn office on select days, as determined by the business needs of the team.
Responsibilities
• Work with a team of high-performing analytics, data science professionals, and cross-functional teams to identify business opportunities and develop algorithms and methodologies to address them.
• Analyze large-scale structured and unstructured data.
• Conduct in-depth and rigorous causal analysis and develop causal methodology and machine learning models to drive member value and customer success (a toy experiment readout follows this list).
• Develop methodologies to enhance LinkedIn's product and platform capabilities.
• Engage with technology partners to build, prototype and validate scalable tools/applications end to end (backend, frontend, data) for converting data to insights.
• Promote and enable adoption of technical advances in Data Science; elevate the art of Data Science practice at LinkedIn.
• Improve LinkedIn's ability to measure and credibly speak to labor market trends and other economic phenomena.
• Initiate and drive projects to completion independently.
• Act as a thought partner to senior leaders to prioritize/scope projects, provide recommendations and evangelize data-driven business decisions in support of strategic goals.
• Partner with cross-functional teams to initiate, lead or contribute to large-scale/complex strategic projects for team, department, and company.
• Provide technical guidance and mentorship to junior team members on solution design as well as lead code/design reviews.
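As a toy illustration of the kind of experiment readout that underpins causal analysis, the sketch below compares a simulated treatment and control group. It is generic and hypothetical, not LinkedIn's methodology; the metric, effect size, and sample sizes are invented.

```python
# Toy A/B readout on simulated data; metric, effect size, and sample sizes are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
control = rng.normal(loc=10.0, scale=3.0, size=5000)    # e.g., sessions per member
treatment = rng.normal(loc=10.2, scale=3.0, size=5000)  # variant with a small assumed lift

lift = treatment.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

# Welch standard error gives a rough 95% confidence interval on the lift.
se = np.sqrt(treatment.var(ddof=1) / treatment.size + control.var(ddof=1) / control.size)
print(f"lift={lift:.3f}, 95% CI=({lift - 1.96 * se:.3f}, {lift + 1.96 * se:.3f}), p={p_value:.4f}")
```

Real analyses at this level typically go further, with covariate adjustment, heterogeneous treatment effects, and observational causal methods when experiments are not possible.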
Basic Qualifications:
• Bachelor's Degree in a quantitative discipline: Statistics, Operations Research, Computer Science, Informatics, Engineering, Applied Mathematics, Economics, etc.
• 5+ years of industry or relevant academia experience
• Background in at least one programming language (e.g., R, Python, Java, Ruby, Scala/Spark, or Perl)
• Experience in applied statistics and statistical modeling in at least one statistical software package (e.g., R, Python)
Preferred Qualifications:
• 7+ years of industry or relevant academia experience
• MS or PhD in a quantitative discipline: Statistics, Operations Research, Computer Science, Informatics, Engineering, Applied Mathematics, Economics, etc.
Suggested Skills
• Machine Learning
• Research
• Causal Inference
You will Benefit from our Culture:
We strongly believe in the well-being of our employees and their families. That is why we offer generous health and wellness programs and time away for employees of all levels.
LinkedIn is committed to fair and equitable compensation practices. The pay range for this role is $164,000 - $268,000. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to skill set, depth of experience, certifications, and specific work location. This may be different in other locations due to differences in the cost of labor.
The total compensation package for this position may also include annual performance bonus, stock, benefits and/or other applicable incentive compensation plans. For more information, visit **************************************
Equal Opportunity Statement
LinkedIn is committed to diversity in its workforce and is proud to be an equal opportunity employer. LinkedIn considers qualified applicants without regard to race, color, religion, creed, gender, national origin, age, disability, veteran status, marital status, pregnancy, sex, gender expression or identity, sexual orientation, citizenship, or any other legally protected class. LinkedIn is an Affirmative Action and Equal Opportunity Employer as described in our equal opportunity statement here: *********************************************************************************************************** Please reference ******************************************************************************************** and ************************************************************************************************ for more information.
LinkedIn is committed to offering an inclusive and accessible experience for all job seekers, including individuals with disabilities. Our goal is to foster an inclusive and accessible workplace where everyone has the opportunity to be successful.
If you need a reasonable accommodation to search for a job opening, apply for a position, or participate in the interview process, connect with us at accommodations@linkedin.com and describe the specific accommodation requested for a disability-related limitation.
Reasonable accommodations are modifications or adjustments to the application or hiring process that would enable you to fully participate in that process. Examples of reasonable accommodations include but are not limited to:
-Documents in alternate formats or read aloud to you
-Having interviews in an accessible location
-Being accompanied by a service dog
-Having a sign language interpreter present for the interview
A request for an accommodation will be responded to within three business days. However, non-disability related requests, such as following up on an application, will not receive a response.
LinkedIn will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay or the pay of another employee or applicant. However, employees who have access to the compensation information of other employees or applicants as a part of their essential job functions cannot disclose the pay of other employees or applicants to individuals who do not otherwise have access to compensation information, unless the disclosure is (a) in response to a formal complaint or charge, (b) in furtherance of an investigation, proceeding, hearing, or action, including an investigation conducted by LinkedIn, or (c) consistent with LinkedIn's legal duty to furnish information.
Pay Transparency Policy Statement
As a federal contractor, LinkedIn follows the Pay Transparency and non-discrimination provisions described at this link: ********************************
Global Data Privacy Notice for Job Candidates
This document provides transparency around the way in which LinkedIn handles personal data of employees and job applicants: ***************************************
Data Engineer - AI & ML
Data Scientist Job 17 miles from Pinole
Responsibilities:
1. Design and Build Data Pipelines:
•Develop, construct, test, and maintain data pipelines to extract, transform, and load (ETL) data from various sources to data warehouses or data lakes.
•Ensure data pipelines are efficient, scalable, and maintainable, enabling seamless data flow for downstream analysis and modeling.
•Work with stakeholders to identify data requirements and implement effective data processing solutions.
2. Data Integration:
•Integrate data from multiple sources such as internal databases, external APIs, third-party vendors, and flat files.
•Collaborate with business teams to understand data needs and ensure data is structured properly for reporting and analytics.
•Build and optimize data ingestion systems to handle both real-time and batch data processing.
3. Data Storage and Management:
•Design and manage data storage solutions (e.g., relational databases, NoSQL databases, data lakes, cloud storage) that support large-scale data processing.
•Implement best practices for data security, backup, and disaster recovery, ensuring that data is safe, recoverable, and complies with relevant regulations.
•Manage and optimize storage systems for scalability and cost efficiency.
4. Data Transformation:
•Develop data transformation logic to clean, enrich, and standardize raw data, ensuring it is suitable for analysis.
•Implement data transformation frameworks and tools, ensuring they work seamlessly across different data formats and sources.
•Ensure the accuracy and integrity of data as it is processed and stored.
5. Automation and Optimization:
•Automate repetitive tasks such as data extraction, transformation, and loading to improve pipeline efficiency.
•Optimize data processing workflows for performance, reducing processing time and resource consumption.
•Troubleshoot and resolve performance bottlenecks in data pipelines.
6. Collaboration with Data Teams:
•Work closely with Data Scientists, Analysts, and business teams to understand data requirements and ensure the correct data is available and accessible.
•Assist Data Scientists with preparing datasets for model training and deployment.
•Provide technical expertise and support to ensure the integrity and consistency of data across all projects.
7. Data Quality Assurance:
•Implement data validation checks to ensure data accuracy, completeness, and consistency throughout the pipeline (a minimal sketch follows this list).
•Develop and enforce data quality standards to detect and resolve data issues before they affect analysis or reporting.
•Monitor and improve data quality by identifying areas for improvement and implementing solutions.
8. Monitoring and Maintenance:
•Set up monitoring and logging for data pipelines to detect and alert for issues such as failures, data mismatches, or delays.
•Perform regular maintenance of data pipelines and storage systems to ensure optimal performance.
•Update and improve data systems as required, keeping up with evolving technology and business needs.
9. Documentation and Reporting:
•Document data pipeline designs, ETL processes, data schemas, and transformation logic for transparency and future reference.
•Create reports on the performance and status of data pipelines, identifying areas of improvement or potential issues.
•Provide guidance to other teams regarding the usage and structure of data systems.
10. Stay Updated with Technology Trends:
•Continuously evaluate and adopt new tools, technologies, and best practices in data engineering and big data systems.
•Participate in industry conferences, webinars, and training to stay current with emerging trends in data engineering and cloud computing.
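As a minimal, generic illustration of the extract-transform-validate-load pattern referenced above, consider the sketch below. It is a toy example; the file names, columns, and checks are placeholders, not this employer's pipeline.

```python
# Generic ETL sketch with simple data-quality checks; names and columns are placeholders.
import pandas as pd

def extract(path: str) -> pd.DataFrame:
    return pd.read_csv(path)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Standardize column names, drop exact duplicates, and parse timestamps.
    df = df.rename(columns=str.lower).drop_duplicates()
    df["event_time"] = pd.to_datetime(df["event_time"], errors="coerce")
    return df.dropna(subset=["event_time"])

def validate(df: pd.DataFrame) -> None:
    # Toy validation; a real pipeline would log metrics and alert rather than raise.
    assert not df.empty, "no rows after transformation"
    assert df["customer_id"].notna().all(), "missing customer_id values"

def load(df: pd.DataFrame, path: str) -> None:
    df.to_parquet(path, index=False)

if __name__ == "__main__":
    frame = transform(extract("raw_events.csv"))  # hypothetical source file
    validate(frame)
    load(frame, "clean_events.parquet")
```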
Qualifications:
1. Educational Background:
Bachelor's or Master's degree in Computer Science, Information Technology, Data Engineering, or a related field
2. Technical Skills:
•Proficiency in programming languages such as Python, Java, or Scala for data processing.
•Strong knowledge of SQL and relational databases (e.g., MySQL, PostgreSQL, MS SQL Server).
•Experience with NoSQL databases (e.g., MongoDB, Cassandra, HBase).
•Familiarity with data warehousing solutions (e.g., Amazon Redshift, Google BigQuery, Snowflake).
•Hands-on experience with ETL frameworks and tools (e.g., Apache NiFi, Talend, Informatica, Airflow).
•Knowledge of big data technologies (e.g., Hadoop, Apache Spark, Kafka).
•Experience with cloud platforms (AWS, Azure, Google Cloud) and related services for data storage and processing.
•Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes) for building scalable data systems.
•Knowledge of version control systems (e.g., Git) and collaboration tools (e.g., Jira, Confluence).
•Understanding data modeling concepts (e.g., star schema, snowflake schema) and how they relate to data warehousing and analytics.
•Knowledge of data lakes, data warehousing architecture, and how to design efficient and scalable storage solutions.
3. Soft Skills:
•Strong problem-solving skills with an ability to troubleshoot complex data issues.
•Excellent communication skills, with the ability to explain technical concepts to both technical and non-technical stakeholders.
•Strong attention to detail and a commitment to maintaining data accuracy and integrity.
•Ability to work effectively in a collaborative, team-based environment.
4. Experience:
- 3+ years of experience in data engineering, with hands-on experience in building and maintaining data pipelines and systems.
- Proven track record of implementing data engineering solutions at scale, preferably in large or complex environments.
- Experience working with data governance, compliance, and security protocols.
5. Preferred Qualifications:
-Experience with machine learning and preparing data for AI/ML model training.
-Familiarity with stream processing frameworks (e.g., Apache Kafka, Apache Flink).
-Certification in cloud platforms (e.g., AWS Certified Big Data - Specialty, Google Cloud Professional Data Engineer).
-Experience with DevOps practices and CI/CD pipelines for data systems.
-Experience with automation and orchestration tools (e.g., Apache Airflow, Luigi).
-Familiarity with data visualization and reporting tools (e.g., Tableau, Power BI) to support analytics teams
6. Work Environment:
•Collaborative and fast-paced work environment.
•Opportunity to work with state-of-the-art technologies.
•Supportive and dynamic team culture
EOE: Our client is an Equal Opportunity Employer and Prohibits Discrimination and Harassment of Any Kind: We are committed to the principle of equal employment opportunity for all employees and to providing employees with a work environment free of discrimination and harassment. All employment decisions at our client are based on business needs, job requirements, and individual qualifications, without regard to race, color, religion or belief, national, social or ethnic origin, sex (including pregnancy), age, physical, mental or sensory disability, HIV Status, sexual orientation, gender identity and/or expression, marital, civil union or domestic partnership status, past or present military service, family medical history or genetic information, family or parental status, or any other status protected by the laws or regulations in the locations where we operate. We will not tolerate discrimination or harassment based on any of these characteristics.
Senior Data Engineer
Data Scientist Job 39 miles from Pinole
Palo Alto or Remote (a willingness to work PST Hours is required)
Join the Future of AI-Driven Healthcare
Up to $200,000 base salary + a competitive equity package
About the Company
The company is at the forefront of revolutionizing AI in healthcare, tackling one of the industry's biggest challenges-ensuring that Generative AI is safe, reliable, and compliant. While AI has the power to transform healthcare, it also introduces risks like biased outputs, inaccurate information, and privacy concerns. The company's cutting-edge platform helps health systems confidently adopt AI by providing secure, HIPAA-compliant solutions, rigorous testing, and real-time monitoring. With a fast-growing team, a mission-driven culture, and a passion for innovation, the company is building the foundation for the future of AI-powered healthcare.
The Role
The company is seeking a Senior Data Engineer to build and scale its healthcare data infrastructure. This is a generalist role focused on ETL pipeline development, healthcare data integration, and automation. The role will be instrumental in standardizing data ingestion across health systems while integrating with platforms like Epic and ensuring compliance with healthcare regulations.
Key Responsibilities
Design and scale ETL pipelines for healthcare data ingestion (a minimal sketch follows this list).
Work with FHIR & HL7v2 data standards to integrate health system data.
Automate workflows and ensure seamless integration with Epic and other EHR systems.
Implement Infrastructure-as-Code (IaC) solutions to scale deployments.
Collaborate with AI/ML teams (though deep AI expertise is not required).
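To ground the FHIR-based ingestion work, here is a minimal sketch that pulls Patient resources from a FHIR R4 search endpoint and flattens them into a table. The base URL, selected fields, and output path are assumptions for illustration; this is not the company's integration.

```python
# Minimal FHIR ingestion sketch; the endpoint and output are placeholders, not a real integration.
import pandas as pd
import requests

FHIR_BASE = "https://example.org/fhir"  # placeholder FHIR R4 base URL

def fetch_patients(page_size: int = 50) -> list[dict]:
    # A FHIR search returns a Bundle; each entry wraps one resource.
    resp = requests.get(f"{FHIR_BASE}/Patient", params={"_count": page_size}, timeout=30)
    resp.raise_for_status()
    bundle = resp.json()
    return [entry["resource"] for entry in bundle.get("entry", [])]

def flatten(patients: list[dict]) -> pd.DataFrame:
    rows = [
        {"id": p.get("id"), "gender": p.get("gender"), "birth_date": p.get("birthDate")}
        for p in patients
    ]
    return pd.DataFrame(rows)

if __name__ == "__main__":
    flatten(fetch_patients()).to_parquet("patients.parquet", index=False)
```

A production pipeline would also follow Bundle pagination links, handle authentication, and land data incrementally.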
Requirements
4+ years working with healthcare data.
Strong Python development experience.
Expertise in FHIR & HL7v2 healthcare data standards.
Cloud experience (Azure preferred, but GCP/AWS also considered).
Experience with Snowflake & Databricks is a plus.
Proven ability to scale ETL pipelines in production.
Hands-on experience with Infrastructure-as-Code (IaC) and CI/CD pipelines.
AI/ML exposure is beneficial, particularly in a data engineering context.
Why Join the Company?
Work on cutting-edge AI solutions transforming healthcare.
Make a real-world impact on health system safety and compliance.
Competitive compensation and top-tier benefits.
Fast-moving startup culture-opportunities to drive meaningful change.
Office perks: Free meals, onsite Equinox, personal training, and more.
Candidates must be authorized to work in the US for consideration.
**An advert never does a role justice, so if you're not sure, feel free to apply and one of our consultants will give you a call with a more detailed overview!**
Senior Data Engineer
Data Scientist Job 17 miles from Pinole
About Us
We're on a mission to redefine how AI interacts with the world-bringing human-like understanding, perception, and communication to the forefront of technology. Our cutting-edge research in multi-modal AI powers everything from text-to-video AI avatars to real-time interactive experiences that are reshaping industries like healthcare, education, sales, and beyond.
Backed by top investors, we're rapidly scaling and looking for exceptional talent to push the boundaries of AI-driven communication. If you're ready to own data strategy at a high-growth AI company, this is your chance to make an impact.
The Role: Data Architect, Builder, and Innovator
Data isn't just fuel for our AI-it's the foundation of everything we build. As Senior Data Engineer, you won't just clean datasets and maintain pipelines. You'll own the entire data lifecycle, from sourcing and structuring to optimizing for next-gen AI models.
We're looking for someone who thinks beyond conventional data engineering-you anticipate what's next, push boundaries, and make sure our AI has access to the best data possible.
Your Mission 🚀
Be a data visionary - You see beyond the present and anticipate the data needs of future AI models, ensuring high-quality, diverse, and structured datasets that drive innovation.
Own data end-to-end - You don't just process data-you strategically source, structure, and optimize multimodal datasets (text, video, images, and more) for maximum AI effectiveness.
Shape AI model training - Work closely with ML engineers to curate and fine-tune datasets that elevate AI performance, efficiency, and inference accuracy.
Be a data hunter - Whether it's web scraping, third-party integrations, or unconventional sources, you know how to acquire the right data and scale data procurement.
Master video data - AI-generated video presents unique challenges. You'll define classification, segmentation, and structuring techniques to optimize video datasets for AI training.
Optimize labeling & automation - Develop automated pipelines that make data cleaning, labeling, and structuring seamless while maintaining gold-standard quality.
Turn internal data into insights - Our own platform is a goldmine of information. You'll uncover, organize, and optimize internal data for strategic impact.
Move fast, but don't break data - You balance speed and precision, ensuring every pipeline, dataset, and workflow is scalable, reliable, and built to last.
What We're Looking For ⚡
You don't just maintain-you build. From zero to fully running pipelines, you make things happen. You take charge of how we leverage data for AI-driven decisions.
Extreme ownership - You don't wait for someone to tell you what data we need. You proactively identify, source, and structure data for AI breakthroughs.
Product-minded approach - You understand the bigger picture and align data strategies with company goals.
Experience with multimodal AI data - Background in LLMs, video, text, and image data is a huge plus.
Automation first - You know how to automate data cleaning, structuring, and labeling to ensure efficiency and scale.
ML-first mindset - You get that better data = better models and optimize datasets for maximal AI impact.
Technical expertise - You've got Python, SQL, and large-scale data processing tools locked down.
Speed + accuracy - You move fast but never compromise on precision.
You set the standard - We're solving problems no one has solved before. You create best practices, not just follow them.
Why Join Us?
When you join our team, you're joining a fast-moving, diverse, and collaborative environment where your work has direct impact. We offer:
Flexible work schedules & remote-friendly culture
Unlimited PTO & top-tier healthcare
Generous gear stipends
A high-energy, supportive team that loves what they do
We aren't just looking for someone to fit in-we want culture creators who bring new perspectives and push boundaries. If you're ready to take data engineering to the next level and drive the future of human-AI interaction, let's talk. 🚀
Apply now and be part of the AI revolution!
Senior Data Engineer
Data Scientist Job 45 miles from Pinole
Hi,
Job Title: Senior Data Engineer
Duration: Contract to Hire Role
Interview Process: Onsite
Pay rate: $75 to $80/hr on C2C
Key Responsibilities:
Design, develop, and maintain data pipelines using Spark, PySpark, and other big data technologies to process large datasets (a minimal sketch follows this list).
Build and optimize complex SQL queries for data extraction, transformation, and loading (ETL) in both batch and real-time workflows.
Work with cloud platforms such as AWS, Azure, or GCP to deploy and manage data infrastructure.
Collaborate with cross-functional teams to gather data requirements and deliver data solutions that meet business needs.
Ensure data quality, integrity, and consistency across different stages of data processing.
Optimize and troubleshoot performance of Spark and SQL-based applications.
Develop and implement data models and data architectures to support analytics and business intelligence initiatives.
Create and maintain automated data workflows and ensure they are scalable and maintainable.
Document data processes, architectures, and pipelines to ensure knowledge sharing across teams.
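The following is a minimal PySpark sketch of the kind of batch transformation described above: read, join, aggregate, and write partitioned output. The table paths and column names are invented for illustration, not this client's data model.

```python
# Minimal PySpark batch transform; paths and columns are invented for illustration.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

orders = spark.read.parquet("s3://example-bucket/raw/orders/")        # placeholder path
customers = spark.read.parquet("s3://example-bucket/raw/customers/")  # placeholder path

daily_revenue = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .join(customers, on="customer_id", how="left")
    .groupBy("order_date", "region")
    .agg(
        F.sum("amount").alias("revenue"),
        F.countDistinct("customer_id").alias("buyers"),
    )
)

# Partitioned writes keep downstream queries and incremental reloads cheap.
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/marts/daily_revenue/"
)
```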
Required Skills & Qualifications:
Proven experience as a Data Engineer with expertise in Spark and PySpark for distributed data processing.
Strong proficiency in SQL, including experience with writing complex queries, performance tuning, and database management.
Hands-on experience with cloud platforms (AWS, Azure, or GCP) and their data processing services such as AWS EMR, Azure Databricks, Google BigQuery, etc.
In-depth understanding of ETL processes, data modelling, and data warehousing concepts.
Experience working with large datasets and optimizing data pipelines for efficiency and scalability.
Familiarity with data storage technologies like Hadoop, HDFS, and cloud-based data lakes.
Knowledge of version control systems like Git, and experience working in Agile environments.
Excellent problem-solving skills and the ability to work in a fast-paced, collaborative team environment.
Strong communication skills and the ability to explain technical concepts to non-technical stakeholders.
Preferred Qualifications:
Experience with containerization and orchestration tools like Docker, Kubernetes, or Apache Airflow.
Familiarity with machine learning concepts and integration of data pipelines with ML workflows.
Experience with real-time data streaming technologies (Kafka, Apache Flink, etc.).
Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field.
Data Engineer
Data Scientist Job 17 miles from Pinole
***Hybrid Data Engineer consulting opportunity in San Francisco, CA****
Focus on SQL knowledge, workflow management (Airflow), and machine learning knowledge - specifically features: understanding data inputs into the model, feature analysis, feature engineering, and training data set composition. Experience with PySpark to perform data transformations, feature engineering, data analysis, and ML training data generation.
This role supports the feature generation, data workflows, and maintenance of the RealTIME personalization models for the Campaigns fleet.
***Streaming industry experience is a huge plus!***
EXTERNAL JOB DESCRIPTION:
Department Overview:
Applied Machine Learning Engineers, Data Engineers, and Data Scientists on the Disney Streaming Machine Learning and Innovation team develop and maintain recommendation and personalization algorithms for Streaming's suite of streaming video apps. As a member of this team you will collaborate across Engineering, Product, and Data teams to apply machine learning methods to meet strategic product personalization goals, explore innovative, cutting edge techniques that can be applied to recommendations, and constantly seek ways to optimize operational processes.
Our team is…
• A group of engineers and data scientists with diverse expertise delivering solutions together.
• Collaborative and dynamic.
• Embracing agile practices.
• Using continuous integration/automated testing.
• Led by startup veterans.
Basic Qualifications
Required Qualifications:
● 2+ years of data engineering experience
● Deep knowledge of the Python data ecosystem
● Great coding and problem-solving skills
● Experience in building large datasets and scalable services
● Experience deploying and running services in AWS and in engineering big-data solutions using technologies like Databricks, EMR, S3, Spark, and Docker
● Experience with Pyspark to perform data transformations, feature engineering, data analysis, and ML training data generation
● Excellent communication and people engagement skills
Preferred Qualifications
● Experience building streaming pipelines using Kafka, Spark, or Flink
● ML algorithmic and systems knowledge/experience
Responsibilities:
● Partner with technical and non-technical colleagues to understand ML algorithm feature, data, and workflow requirements.
● Implement necessary feature analysis, new features, training datasets, and related workflows, monitoring metrics, and services
● Work with Engineering teams to collect required data from internal and external systems
● Develop and maintain ETL routines using orchestration tools such as Airflow (a toy DAG sketch follows this list)
● Collaborate with machine learning practitioners to design and build model- and data-forward solutions
● Deploy scalable streaming and batch data pipelines to support petabyte scale datasets
● Enforce common data design patterns to increase code maintainability
● Create ETL architecture designs and conduct reviews
● Perform ad hoc data analysis and maintenance as necessary
● Partner with team leads to identify, design, and implement internal process improvements
● Drive and maintain a culture of quality, innovation, and experimentation
● Work in an Agile environment that focuses on collaboration and teamwork
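For orientation only, here is a toy Airflow DAG of the shape such an ETL routine might take: a daily schedule with an extract task feeding a training-set build task. The DAG id, task names, and task bodies are placeholders, not this team's actual workflows.

```python
# Toy Airflow DAG; dag_id, task names, and task bodies are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_features(**context):
    print("pull raw engagement data for", context["ds"])

def build_training_set(**context):
    print("assemble the ML training dataset for", context["ds"])

with DAG(
    dag_id="personalization_features",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_features", python_callable=extract_features)
    build = PythonOperator(task_id="build_training_set", python_callable=build_training_set)
    extract >> build
```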
Required Education: Bachelor's degree or relevant years of work experience
Senior Data Engineer
Data Scientist Job 38 miles from Pinole
Job Title - Data Engineer
Location: Seattle, Santa Monica, New York, or San Francisco (onsite/hybrid, 4 days per week)
Type: Full-time
Key Points:
Experience with Scala or Python?
Databricks/Hadoop
Spark
Airflow
SQL
Experience with Snowflake and how have you used it in past projects
Experience in data modelling
Are they open for the HackerRank test
Basic Qualifications:
5+ years of relevant data engineering experience.
Strong understanding of data modeling principles, including dimensional modeling and data normalization.
Good understanding of SQL Engines and able to conduct advanced performance tuning.
Ability to think strategically, analyze and interpret market and consumer information.
Strong communication skills - written and verbal presentations.
Excellent conceptual and analytical reasoning competencies.
Comfortable working in a fast-paced and highly collaborative environment.
Familiarity with Agile Scrum principles and ceremonies
Preferred Qualifications -
2+ years of work experience implementing and reporting on business key performance indicators in data warehousing environments, required.
2+ years of experience using analytic SQL, working with traditional relational databases and/or distributed systems (Snowflake or Redshift), required.
1+ years of experience with programming languages (e.g. Python, PySpark), preferred.
1+ years of experience with data orchestration/ETL tools (Airflow, NiFi), preferred.
Experience with Snowflake, Databricks/EMR/Spark, and/or Airflow.
Required Education - Degree in computer science, information systems
Additional Information - Onsite/ Hybrid 4 days per week - in order of preference:
Seattle, Santa Monica, New York or San Francisco.
Intake Notes:
Development experience with Airflow, Snowflake, and Databricks.
At least five years of relevant recent experience with the mentioned tech stack.
Proficiency in Python, as the interview process includes a Python coding test on HackerRank.
Data Engineer
Data Scientist Job 45 miles from Pinole
Calling All Data Engineers!
Join a groundbreaking fintech revolution! We're seeking a BI/Data Engineer to be part of a Global Engineering Team building the future of blockchain payments. Our client has developed a cutting-edge blockchain payment platform, reshaping traditional payment systems and driving the mobile fintech revolution.
Are you the type of engineer who thinks,
"If I could rebuild this entire system, here's how I'd do it right"?
If so, this role is for you.
What You'll Do:
✅ Build and operate an Analytics Platform using AWS Native services to process streamed data across a blockchain-based payment network.
✅ Organize incoming data into structured formats for querying, reporting, and system feedback.
✅ Collaborate with product teams to implement fraud detection algorithms that enhance system security (a toy rule sketch follows this list).
✅ Provide analytics for blockchain payment transaction performance management.
✅ Transform unstructured data into structured formats, optimizing data streams at their source.
✅ Design dashboards and query interfaces for operational data.
✅ Optimize data transmission by filtering out unnecessary sources.
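To give a flavor of the fraud-detection collaboration, here is a deliberately naive sketch that flags transactions far above an account's typical spend. The data, columns, and threshold are invented; production fraud detection would be far richer (velocity features, device signals, model scores).

```python
# Deliberately naive spend-spike rule; data, columns, and threshold are invented.
import pandas as pd

tx = pd.DataFrame(
    {
        "account": ["a1", "a1", "a2", "a2", "a2"],
        "amount": [25.0, 30.0, 20.0, 22.0, 950.0],
    }
)

# Compare each payment against the account's typical (median) spend.
tx["baseline"] = tx.groupby("account")["amount"].transform("median")
tx["suspicious"] = tx["amount"] > 5 * tx["baseline"]

print(tx[tx["suspicious"]])  # the 950.0 payment on account a2 gets flagged
```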
What You Bring:
✔ 2-3 years of experience with Elastic, Logstash, Kibana.
✔ Hands-on experience working with unstructured and semi-structured data.
✔ Background in fraud detection, risk analysis, and payments.
✔ Startup experience - you thrive in a fast-paced, high-growth environment.
Why Join?
🚀 Be an early engineer in a small company with a big vision.
📈 Opportunities to grow a team and shape industry-leading systems.
🏡 Hybrid workspace in Silicon Valley with a competitive benefits package.
🔥 Work with driven innovators who love the startup grind!
Sound like you? Reach out to **************************
Data Engineer
Data Scientist Job 38 miles from Pinole
$165,000 - $185,000 BASE SALARY + EQUITY
REMOTE OR HYBRID (3 DAYS ONSITE IF LOCAL)
A fast-growing healthtech startup developing solutions to help healthcare organizations safely implement generative AI. Their platform ensures compliance, enhances model reliability, and provides ongoing monitoring to support responsible AI adoption.
THE ROLE
As a Data Engineer, you will design and optimize scalable ETL pipelines, standardize data processing workflows, and integrate healthcare data across various platforms. You'll play a key role in automating data ingestion, enhancing system reliability, and ensuring high-quality datasets for AI applications. This position requires problem-solving ability, adaptability, and a strong interest in the intersection of healthcare and AI.
ROLE RESPONSIBILITIES
Develop and maintain ETL pipelines that extract data from FHIR endpoints.
Build and enhance data ingestion processes to streamline healthcare system integration.
Automate data workflows for seamless interaction with Epic and other EHR platforms.
Establish repeatable data transformation patterns to ensure consistency across systems.
Deploy Infrastructure-as-Code (IaC) solutions for efficient scaling.
Work closely with AI/ML teams to provide clean, structured data for model development.
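As a rough, non-authoritative sketch of the FHIR extraction responsibility above: the snippet pages through a Patient search and flattens a few fields per resource. The base URL, bearer token, and chosen fields are hypothetical; a real Epic integration would add SMART-on-FHIR authentication, retries, and incremental loading.

import requests

FHIR_BASE = "https://fhir.example.com/api/FHIR/R4"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>", "Accept": "application/fhir+json"}  # placeholder token

def extract_patients(base_url: str) -> list[dict]:
    """Page through the Patient endpoint and flatten a few fields per resource."""
    rows, url = [], f"{base_url}/Patient?_count=100"
    while url:
        bundle = requests.get(url, headers=HEADERS, timeout=30).json()
        for entry in bundle.get("entry", []):
            resource = entry["resource"]
            rows.append({
                "id": resource.get("id"),
                "gender": resource.get("gender"),
                "birth_date": resource.get("birthDate"),
            })
        # FHIR search responses advertise the next page via a link with relation "next".
        url = next((link["url"] for link in bundle.get("link", []) if link.get("relation") == "next"), None)
    return rows

A downstream step would typically land these rows in a warehouse table before the repeatable transformation patterns described above are applied.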
SKILLS & EXPERIENCE
4+ years of experience working with healthcare datasets.
Strong programming skills in Python.
Hands-on experience with FHIR & HL7v2 data standards.
Cloud expertise in Azure (preferred), GCP, or AWS.
Experience working with Snowflake & Databricks is a plus.
Proven ability to scale ETL pipelines in live production environments.
Familiarity with Infrastructure-as-Code (IaC) and CI/CD pipelines.
Exposure to AI/ML data engineering is a plus, particularly in model training workflows.
BENEFITS
$165,000 - $185,000 base salary + equity.
Flexible remote or hybrid work model.
Opportunity to shape the future of AI governance in healthcare.
If you're a data engineer passionate about AI in healthcare, we'd love to connect!
Senior Data Engineer
Data Scientist Job 38 miles from Pinole
About Us
We're on a mission to redefine how AI interacts with the world, bringing human-like understanding, perception, and communication to the forefront of technology. Our cutting-edge research in multi-modal AI powers everything from text-to-video AI avatars to real-time interactive experiences that are reshaping industries like healthcare, education, sales, and beyond.
Backed by top investors, we're rapidly scaling and looking for exceptional talent to push the boundaries of AI-driven communication. If you're ready to own data strategy at a high-growth AI company, this is your chance to make an impact.
The Role: Data Architect, Builder, and Innovator
Data isn't just fuel for our AI; it's the foundation of everything we build. As Senior Data Engineer, you won't just clean datasets and maintain pipelines. You'll own the entire data lifecycle, from sourcing and structuring to optimizing for next-gen AI models.
We're looking for someone who thinks beyond conventional data engineering: you anticipate what's next, push boundaries, and make sure our AI has access to the best data possible.
Your Mission 🚀
Be a data visionary - You see beyond the present and anticipate the data needs of future AI models, ensuring high-quality, diverse, and structured datasets that drive innovation.
Own data end-to-end - You don't just process data; you strategically source, structure, and optimize multimodal datasets (text, video, images, and more) for maximum AI effectiveness.
Shape AI model training - Work closely with ML engineers to curate and fine-tune datasets that elevate AI performance, efficiency, and inference accuracy.
Be a data hunter - Whether it's web scraping, third-party integrations, or unconventional sources, you know how to acquire the right data and scale data procurement.
Master video data - AI-generated video presents unique challenges. You'll define classification, segmentation, and structuring techniques to optimize video datasets for AI training.
Optimize labeling & automation - Develop automated pipelines that make data cleaning, labeling, and structuring seamless while maintaining gold-standard quality (a small illustrative example follows this list).
Turn internal data into insights - Our own platform is a goldmine of information. You'll uncover, organize, and optimize internal data for strategic impact.
Move fast, but don't break data - You balance speed and precision, ensuring every pipeline, dataset, and workflow is scalable, reliable, and built to last.
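A minimal example of the labeling-and-automation item above - dropping short and exactly duplicated text records before they reach a labeling queue. The file paths, the "text" field, and the length threshold are assumptions for illustration, not details of the platform.

import hashlib
import json

def clean_for_labeling(in_path: str, out_path: str, min_chars: int = 20) -> int:
    """Drop short and exactly duplicated JSONL records before labeling; return the count kept."""
    seen, kept = set(), 0
    with open(in_path, encoding="utf-8") as src, open(out_path, "w", encoding="utf-8") as dst:
        for line in src:
            record = json.loads(line)
            text = (record.get("text") or "").strip()
            digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
            if len(text) < min_chars or digest in seen:
                continue  # skip noise and exact duplicates
            seen.add(digest)
            dst.write(json.dumps(record, ensure_ascii=False) + "\n")
            kept += 1
    return kept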
What We're Looking For ⚡
You don't just maintain; you build. From zero to fully running pipelines, you make things happen. You take charge of how we leverage data for AI-driven decisions.
Extreme ownership - You don't wait for someone to tell you what data we need. You proactively identify, source, and structure data for AI breakthroughs.
Product-minded approach - You understand the bigger picture and align data strategies with company goals.
Experience with multimodal AI data - Background in LLMs, video, text, and image data is a huge plus.
Automation first - You know how to automate data cleaning, structuring, and labeling to ensure efficiency and scale.
ML-first mindset - You get that better data = better models, and you optimize datasets for maximum AI impact.
Technical expertise - You've got Python, SQL, and large-scale data processing tools locked down.
Speed + accuracy - You move fast but never compromise on precision.
You set the standard - We're solving problems no one has solved before. You create best practices, not just follow them.
Why Join Us?
When you join our team, you're joining a fast-moving, diverse, and collaborative environment where your work has direct impact. We offer:
Flexible work schedules & remote-friendly culture
Unlimited PTO & top-tier healthcare
Generous gear stipends
A high-energy, supportive team that loves what they do
We aren't just looking for someone to fit in; we want culture creators who bring new perspectives and push boundaries. If you're ready to take data engineering to the next level and drive the future of human-AI interaction, let's talk. 🚀
Apply now and be part of the AI revolution!
Senior Data Engineer
Data Scientist Job 38 miles from Pinole
Job Title - Data Engineer
Location: Seattle, Santa Monica, New York, or San Francisco (onsite/hybrid, 4 days per week)
Type: Full-time
Key Points:
Experience with Scala or Python?
Databricks/Hadoop
Spark
Airflow
SQL
Experience with Snowflake, and how have you used it in past projects?
Experience in data modeling
Are they open to the HackerRank test?
Basic Qualifications:
5+ years of relevant data engineering experience.
Strong understanding of data modeling principles including Dimensional modeling, data normalization principles.
Good understanding of SQL engines and the ability to conduct advanced performance tuning.
Ability to think strategically, analyze and interpret market and consumer information.
Strong communication skills - written and verbal presentations.
Excellent conceptual and analytical reasoning competencies.
Comfortable working in a fast-paced and highly collaborative environment.
Familiarity with Agile Scrum principles and ceremonies.
Preferred Qualifications -
2+ years of work experience implementing and reporting on business key performance indicators in data warehousing environments, required.
2+ years of experience using analytic SQL, working with traditional relational databases and/or distributed systems (Snowflake or Redshift), required.
1+ years of experience with programming languages (e.g., Python, PySpark), preferred.
1+ years of experience with data orchestration/ETL tools (Airflow, NiFi), preferred.
Experience with Snowflake, Databricks/EMR/Spark, and/or Airflow.
Required Education - Degree in computer science or information systems.
Additional Information - Onsite/hybrid, 4 days per week; locations in order of preference:
Seattle, Santa Monica, New York, or San Francisco.
Intake Notes:
Development experience with Airflow, Snowflake, and Databricks.
At least five years of relevant recent experience with the mentioned tech stack.
Proficiency in Python, as the interview process includes a Python coding test on HackerRank.