Data Scientist Principal
Data Scientist Job 15 miles from Grandview
Position Title
Data Scientist Principal

Broadmoor Campus

Career Interest:
As a Data Scientist, you will discover and solve real-world problems by analyzing clinical, operational, and financial data, defining new metrics and use cases, designing experiments, and creating models. All of this will be done as part of the centralized Analytics and Business Intelligence department while collaborating with various stakeholders throughout the health system. This role explores and analyzes information to gain maximum insight that can give the organization a competitive advantage. The Data Scientist will need to become a subject matter expert in the domains they support and the underlying data, including metadata, and to recommend ways to improve data quality. Further, the Data Scientist will be responsible for pioneering new methodologies, defining and improving performance metrics and evaluation, and will generally lead innovation efforts on the Data Science team.

Successful candidates should have a strong statistics background, programming skills, and an understanding of the industry and how it operates. Critical thinking and problem-solving skills are essential for interpreting data, along with good communication skills to deliver the findings. Candidates should be comfortable operating in a dynamic environment and able to balance multiple priorities and competing deadlines. Lastly, the Data Scientist should have an entrepreneurial spirit, always asking "how can we do this better?", to drive outcomes for patients as well as for the Health System as a whole.

Responsibilities and Essential Job Functions:
* Participate in all aspects of the project lifecycle, from requirement gathering, hypothesis generation, data extraction and transformation, programming, testing, and delivery of recommendations to implementation and ongoing monitoring.
* Perform diagnostic analysis to drill down to the root cause and isolate confounding information, creating a more complete understanding of current performance and informing ways to improve it.
* Develop robust models, using an iterative model development process, based on statistical approaches and data mining techniques to provide insights and recommendations from complex data sets that support and guide evidence-based practices.
* Formulate, train, and validate predictive models and algorithms to solve a diverse set of problems, incorporating the latest research findings.
* Identify the optimal modeling technique based on the available data types and the objectives/use cases (supervised, unsupervised, semi-supervised, or reinforcement learning).
* Implement efficient automated processes that produce modeling results at scale.
* Present actionable, data-driven stories in a visually appealing and easy-to-understand manner to all levels of operations within the health system, including the executive team.
* Review tasks critically and ensure they are appropriately prioritized and sized for incremental delivery.
* Anticipate and communicate request and project roadblocks and delays before they require escalation.
* Must be able to perform the professional, clinical, and/or technical competencies of the assigned unit or department.
* These statements are intended to describe the essential functions of the job and are not intended to be an exhaustive list of all responsibilities. Skills and duties may vary depending on your department or unit. Other duties may be assigned as required.

Required Education and Experience:
* Master's degree in Computer Science, Mathematics, Statistics, Engineering, Economics, or another computational/quantitative field (or equivalent years of experience)
* 6 or more years of experience using data mining/analytical methods and associated tools such as Python, R, etc.
* 6 or more years of SQL experience in a relational database, or an equivalent combination of education and experience
* 6 or more years of experience working with various statistical methods, such as regression analysis, time series analysis, experimental design, etc.
* 1 or more years of experience developing applications with one or more business intelligence tools such as Power BI, Qlik, SAP BusinessObjects, Tableau, etc.

Preferred Education and Experience:
* Doctorate in Statistics, Epidemiology, Health Economics, Biostatistics, Computer Science, Clinical/Biomedical Informatics, or a related computational and quantitative discipline
* Experience with analytical documentation tools such as Jupyter Notebook
* Experience in a healthcare environment
* Experience working with Caboodle
* Experience with one or more of the following data domains: Activity Based Costing, Hospital Industry Data Institute, Press Ganey, Vizient, Workday, etc.

Required Licensure and Certification:
* Epic certification in 4 data model(s). If not certified, certification is required within 24 months of employment.

Preferred Licensure and Certification:
* Project Management Professional (PMP) - Project Management Institute (PMI); Project Management, Lean, Six Sigma, or Agile certification
* Microsoft certification: Power BI Data Analyst Associate

Time Type:
Full time

Job Requisition ID:
R-41016

We are an equal employment opportunity employer without regard to a person's race, color, religion, sex (including pregnancy, gender identity and sexual orientation), national origin, ancestry, age (40 or older), disability, veteran status or genetic information.

Need help finding the right job?
We can recommend jobs specifically for you! Create a custom Job Alert by selecting criteria that suit your career interests.
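The required experience above names regression analysis among the statistical methods, and the responsibilities describe an iterative develop-and-validate model cycle. A minimal sketch of that cycle in Python (one of the tools the posting lists), using ordinary least squares with a held-out validation split; the length-of-stay data here is entirely synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: predict length of stay (days) from patient age.
age = rng.uniform(20, 90, 200)
los = 1.5 + 0.05 * age + rng.normal(0, 0.5, 200)

# Hold out a validation split, as in an iterative develop/validate cycle.
train, valid = slice(0, 150), slice(150, 200)
X = np.column_stack([np.ones_like(age), age])  # intercept + age

# Fit on the training split only.
coef, *_ = np.linalg.lstsq(X[train], los[train], rcond=None)

# Evaluate on the held-out split.
pred = X[valid] @ coef
rmse = float(np.sqrt(np.mean((pred - los[valid]) ** 2)))
```

A real clinical model would add feature engineering, cross-validation, and ongoing monitoring, but the fit-then-validate loop is the same.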
Data Scientist-Applications (Onsite)
Data Scientist Job 9 miles from Grandview
We're seeking a highly skilled Data Scientist to modernize Netsmart's product portfolio with AI-native applications. As a Lead Data Scientist specializing in AI/ML, you will play a pivotal role in architecting, developing, and deploying state-of-the-art models to power our applications. You will collaborate closely with cross-functional teams, including product managers, software engineers, and software architects, to unlock machine learning capabilities in all our products. We build to scale domestically, leveraging state-of-the-art tooling, with zero downtime.
Responsibilities:
* Design, develop, and optimize machine learning models for production environments.
* Build scalable AI systems, including data pipelines, model training, and deployment frameworks.
* Work with deep learning frameworks (TensorFlow, PyTorch) and traditional ML libraries (Scikit-learn, XGBoost).
* Collaborate with cross-functional teams to integrate AI solutions into existing products and services.
* Optimize models for performance, scalability, and efficiency in cloud or on-prem environments.
* Stay up to date with the latest advancements in AI/ML and apply best practices to ongoing projects.
* Conduct experiments, analyze performance, and iterate on models to enhance accuracy and efficiency.
* Mentor junior engineers and contribute to the AI/ML technical strategy.
* Design and develop strategy for:
* medical language learning through language models
* medical code and group code (per various specifications) using large language models
* contract interpretation
* transforming rules and workflows into decision trees using deep learning ML models
* risk detection/anomaly detection using deep learning ML models
* Work with the Engineering team to lay out a training data acquisition strategy
* Lay out the process for ML use case evaluation and planning
* Design, develop, and optimize RAG (Retrieval-Augmented Generation) models to facilitate effective information retrieval
* Collaborate with engineers to integrate machine learning models into production systems, ensuring scalability, reliability, and performance
* This role is hybrid and based in Overland Park, KS.
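The RAG responsibility above hinges on one core step: retrieving the passages most relevant to a query before a language model generates an answer. A minimal sketch of that retrieval step, using bag-of-words cosine similarity as a toy stand-in for a real embedding model; the document strings and query are invented for illustration:

```python
import math
from collections import Counter

def vectorize(text):
    # Toy stand-in for an embedding model: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=2):
    # Rank documents by similarity to the query; a real RAG system
    # would use dense embeddings and a vector index instead.
    qv = vectorize(query)
    ranked = sorted(documents, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:k]

docs = [
    "prior authorization rules for imaging orders",
    "inpatient discharge medication reconciliation workflow",
    "claim denial codes and appeal timelines",
]
context = retrieve("how do I appeal a claim denial", docs, k=1)
prompt = f"Answer using only this context:\n{context[0]}\n\nQuestion: how do I appeal a claim denial"
```

A production system would replace `vectorize` with dense embeddings and a vector index, but the retrieve-then-prompt shape stays the same.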
Requirements:
Education & Experience:
* Bachelor's degree in Computer Science, AI, Data Science, or a related field.
* 5+ years of experience in AI/ML development, with a strong track record of building and deploying models.
Technical Skills:
* Proficiency in Python and ML frameworks (TensorFlow, PyTorch, Scikit-learn).
* Experience with cloud platforms (AWS, GCP, Azure) and MLOps tools (MLflow, Kubeflow, Airflow).
* Strong background in data preprocessing, feature engineering, and model optimization.
* Familiarity with deep learning architectures (CNNs, RNNs, Transformers) and NLP techniques.
* Experience with big data technologies (Spark, Hadoop) is a plus.
* Knowledge of DevOps, CI/CD for AI models, and containerization (Docker, Kubernetes) is a plus.
Soft Skills:
* Strong problem-solving and analytical skills.
* Ability to work in a fast-paced, collaborative environment.
* Excellent communication skills to convey complex AI concepts to stakeholders.
Preferred Qualifications:
* Master's Degree
* AWS Cloud Experience
Netsmart is proud to be an equal opportunity workplace and is an affirmative action employer, providing equal employment and advancement opportunities to all individuals. We celebrate diversity and are committed to creating an inclusive environment for all associates. All employment decisions at Netsmart, including but not limited to recruiting, hiring, promotion and transfer, are based on performance, qualifications, abilities, education and experience. Netsmart does not discriminate in employment opportunities or practices based on race, color, religion, sex (including pregnancy), sexual orientation, gender identity or expression, national origin, age, physical or mental disability, past or present military service, or any other status protected by the laws or regulations in the locations where we operate.
Netsmart desires to provide a healthy and safe workplace and, as a government contractor, Netsmart is committed to maintaining a drug-free workplace in accordance with applicable federal law. Pursuant to Netsmart policy, all post-offer candidates are required to successfully complete a pre-employment background check, including a drug screen, which is provided at Netsmart's sole expense. In the event a candidate tests positive for a controlled substance, Netsmart will rescind the offer of employment unless the individual can provide proof of valid prescription to Netsmart's third party screening provider.
If you are located in a state which grants you the right to receive information on salary range, pay scale, description of benefits or other compensation for this position, please use this form to request details to which you may be legally entitled.
All applicants for employment must be legally authorized to work in the United States. Netsmart does not provide work visa sponsorship for this position.
Netsmart's Job Applicant Privacy Notice may be found here.
Sr Data Scientist
Data Scientist Job 17 miles from Grandview
Our Firm
American Century Investments is a leading global asset manager focused on delivering investment results and building long-term client relationships while supporting research that can improve human health and save lives. Founded in 1958, the firm's 1,400 employees serve financial professionals, institutions, corporations, and individual investors, offering a wide range of investment strategies across a variety of investment disciplines.
We are committed to providing institutional-quality, actively managed solutions with a performance-centered mindset. Our expertise spans global growth equity, global value equity, disciplined equity, multi-asset strategies, global fixed income, alternatives, and ETFs.
Privately controlled and independent, we focus solely on investment management. This empowers us to align our decisions with client expectations and concentrate on their long-term money management needs.
Our culture of winning behaviors exemplifies our dedication to clients every single day. Delivering investment results enables us to distribute over 40% of our dividends (more than $1.8 billion) to the Stowers Institute for Medical Research, a 500-person, non-profit basic biomedical research organization with a controlling interest in American Century Investments. Our dividend payments provide ongoing financial support for the Institute's work of uncovering the causes, treatments, and prevention of life-threatening diseases, like cancer.
For more information, please visit americancentury.com.
The Sr. Data Scientist on the Client Insights and Analysis Team will serve as a primary insights provider for the Wealth Management business channel. The role's primary responsibility is to work with business decision-makers to identify business issues and objectives, explore data mining and model building opportunities, build statistical models, present analytical results and models, and provide actionable insights. As a senior member of the team, the Sr. Data Scientist will also assist their manager in exploring analytical opportunities, consult on the team's analytical work, and coach other team members. A strong entrepreneurial spirit, interpersonal skills, consultation skills, client management skills, and analytical skills are key to success in this role.
Major Responsibilities and Accountabilities:
Actively participate in client-based cross-functional teams (10%): represent the data science profession on client-based cross-functional teams; identify business issues and resulting research needs; participate in client team project planning; fulfill identified team responsibilities.
Consult with decision-makers to identify business objectives, data sources, and analytical methodology (30%): work independently or with other analysts to identify and define business issues and goals, and convert them into appropriate analytical objectives and strategies. Explore and understand all internal data sources, and maintain a clear understanding of the methodology behind the analysis and model building.
Provide research design and analysis, and engage in data science and model building (50%): based on identified needs, analyze database information or third-party data sources in response to data analysis requests. Identify and implement appropriate designs and analyses, and explore data sources to uncover useful information that helps business clients in their decision making. Primarily uses Python and the AWS tech stack (Athena, SageMaker, Glue, Redshift, etc.) as well as a variety of other data and analysis software. Activities include, but are not limited to:
Create research design, measurement for clients in business execution
Provide analysis reports and give proper interpretation to clients
Build models to meet clients' objective using available data sources
Report analytical results and recommendations to business clients
Provide support for Client Insights and Analysis team; consult on data science and model building issues
Help team development, be a coach for other analysts
Act as a single point of contact for business partners and address their needs with minimal guidance from manager
Responsible for training and maintaining scoring mechanisms that drive our Next Best Action/Lead Management programs including Client Engagement, Interest and Lifetime Value Scoring
Act as data steward for critical Distribution data sets such as those provided by key firm partners
Administrative: participate in professional development (10%)
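The scoring responsibilities above (engagement, interest, and lifetime-value scores feeding Next Best Action) typically reduce to fitting a propensity model and ranking clients by predicted probability. A minimal sketch with scikit-learn on synthetic data; the features, coefficients, and sample sizes are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 1000

# Synthetic client features: tenure (years) and recent web logins.
tenure = rng.uniform(0, 20, n)
logins = rng.poisson(3, n)

# Synthetic label: engaged clients log in more often.
p = 1 / (1 + np.exp(-(0.6 * logins - 2)))
engaged = rng.random(n) < p

# Fit an engagement propensity model.
X = np.column_stack([tenure, logins])
model = LogisticRegression().fit(X, engaged)

# Score and rank: highest propensity first, ready for a Next Best Action queue.
scores = model.predict_proba(X)[:, 1]
ranked = np.argsort(scores)[::-1]
```

In practice the label would come from observed client behavior and the scores would be refreshed and monitored on a schedule, but the fit-score-rank pattern is the core of such programs.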
Required Skills:
Ability to work independently and as a member of a team.
Ability to analyze data and translate results into actionable insights. Excellent problem-solving skills are essential.
Ability to communicate clearly and concisely, both verbally and in written form.
Ability to prioritize work and manage multiple requests concurrently.
Highly motivated self-starter with the ability to determine priorities, to plan, organize and follow through on assignments. Accuracy and attention to detail are essential.
Must be customer focused.
Ability to learn technical software applications and statistical techniques.
Professional attitude and appearance.
Ability to translate business issues into data analysis objectives.
Understands latest techniques in statistical data analysis.
Required Experience:
Master's or PhD degree in Statistics, Mathematics, Computer Science, or another related analytical field required.
Excellent problem solving, analytical and quantitative analysis skills.
Experience with Python or other statistical software language/application.
Experience with AWS and its data science stack (SageMaker, Glue, Athena, etc.)
Experience with SQL in data warehouse environment.
Working knowledge of Microsoft Office applications.
Minimum of 5 years of data analysis experience.
Excellent interpersonal, verbal and written communication skills.
Experience working with senior management in presenting reports/making recommendations.
Models the American Century Investments Winning Behaviors: Client Focused, Courageous and Accountable, Collaborative, Curious and Adaptable, Competitively Driven, Adheres to the highest ethical standards and business practices, and Supports a culture of compliance
Preferred:
Knowledge of Financial Services industry or Direct to Consumer, ideally of the Asset Management space.
5+ years of experience building and implementing statistical models.
2+ years of experience managing projects, programs or vendors.
For New York-based candidates, the salary range for this role is $139,000.00 - $167,000.00. All offers are based on various factors, including but not limited to a candidate's location, skills, experience, and relevant education and/or training. This position is eligible for a cash incentive, providing the potential to earn more.
Additional Requirements:
Employees are required to be in the office on a scheduled frequency. Adherence to this schedule is essential to fulfilling the expectations of the role.
American Century Investments is committed to complying with the Americans with Disabilities Act and all other applicable Equal Employment Opportunity laws and regulations. As such, American Century strives to provide a reasonable accommodation to any qualified individual under the ADA to perform essential job functions.
American Century Investments believes all individuals are entitled to equal employment opportunity and advancement opportunities without regard to race, religious creed, color, sex, national origin, ancestry, physical disability, mental disability, medical condition, genetic information, marital status, gender, gender identity, gender expression, age for individuals forty years of age and older, military and veteran status, sexual orientation, and any other basis protected by applicable federal, state and local laws. ACI does not discriminate or adopt any policy that discriminates against an individual or any group of individuals on any of these bases.
#LI-Hybrid
©2019 American Century Proprietary Holdings, Inc. All rights reserved.
Sr. Data Scientist (Operations Research)
Data Scientist Job 17 miles from Grandview
EquipmentShare is Hiring a Sr. Data Scientist (Operations Research)
EquipmentShare is searching for a Sr. Data Scientist specializing in Operations Research (OR) to join our team. This position is fully remote.
Primary Responsibilities
Despite having been fundamentally altered by earlier industrial revolutions, the construction industry has hardly budged with the computer revolution. In fact, since 1970, labor productivity in the US construction industry has actually declined, despite productivity more than doubling in the rest of the economy. This has contributed to housing shortages and the parlous state of infrastructure in some places, and is sanding the gears of carbon reduction efforts.
We think the industry is ripe for change, and we're pushing the leading edge of that change with our next generation T3 Platform, the OS for Construction. Through T3, we help contractors to coordinate humans and (increasingly smarter) machines to build more effectively.
As a Sr. Data Scientist specialized in OR in our small and quickly growing team, you will play a major role in this effort. In particular, you will
Create and enhance fleet management practices across the company through analytical techniques
Develop, from scratch, simulation experiments that lead to implemented optimization algorithms to solve our complex supply chain problems
Assist in identifying key KPIs and metrics to measure our company's supply chain effectiveness
Help to identify the highest value next opportunities for OR within a big greenfield space, work cross-functionally to plan and build, and measure your significant business impact via experimentation
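The simulation-and-optimization responsibilities above are classic operations research. As one illustration of the optimization half, a small linear program allocating machines from two depots to two job sites at minimum transport cost, solved with SciPy; all costs and quantities are invented:

```python
from scipy.optimize import linprog

# Hypothetical: ship excavators from 2 depots to 2 job sites at minimum cost.
# Decision variables x = [d1->s1, d1->s2, d2->s1, d2->s2].
cost = [4, 6, 5, 3]   # transport cost per machine on each lane
A_eq = [
    [1, 0, 1, 0],     # site 1 receives from both depots
    [0, 1, 0, 1],     # site 2 receives from both depots
]
b_eq = [8, 5]         # machines required at each site
A_ub = [
    [1, 1, 0, 0],     # depot 1 total shipments
    [0, 0, 1, 1],     # depot 2 total shipments
]
b_ub = [10, 6]        # machines available at each depot

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
```

Real supply chain problems add integrality, time windows, and stochastic demand, which is where the simulation experiments come in, but the allocate-under-constraints core looks like this.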
Why We're a Better Place to Work
Competitive compensation packages
401(k) and company match
Health insurance and medical coverage benefits
Unlimited paid time off
Generous paid parental leave
Volunteering and local charity initiatives that help you nurture and grow the communities you call home
Stocked breakroom and full kitchen (corporate HQ)
State-of-the-art onsite gym (corporate HQ)/Gym stipend for remote employees
Opportunities for career and professional development with conferences, events, seminars, continued education
About You
Our mission to change an entire industry is not easily achieved, so we only hire people who are inspired by the goal and up for the challenge. In turn, our employees have every opportunity to grow with us, achieve personal and professional success and enjoy making a tangible difference in an industry that's long been resistant to change.
Skills & Qualifications
Minimum Qualifications:
Graduate degree or equivalent practical experience in statistics, computer science, applied math, operations research or related field
4+ years working on technology-powered products and projects in OR, supply chain optimization, or data science roles
Demonstrated understanding of the techniques and methods of modern algorithm development
Strong cross-functional communication skills
Must be qualified to work in the United States - we are not sponsoring any candidates at this time
EquipmentShare is committed to a diverse and inclusive workplace. EquipmentShare is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status.
#LI-Remote
Actuary
Data Scientist Job 17 miles from Grandview
Summary / Objective
Serve as the technical expert and project manager for the annual funding, accounting, and compliance requirements for defined benefit pension plans.
Responsibilities
* Leverage tools and resources to provide clients with innovative and effective solutions
* Assist in the development and delivery of quality client communication and deliverables in support of Senior Consultants
* Prepare and manage internal project deadlines
* Prepare, technically review, and serve as primary contact to clients on annual service offerings
* Review pension benefit calculations in accordance with plan provisions, Internal Revenue Code, ERISA, and other legal regulations
* Oversee annual employee pension statement projects
* Assist in completing and managing non-routine client work such as plan design, plan termination, benefit statements, and other special projects
* Assist in the development and training of student-level staff
Skills & Qualifications
* Bachelor's degree in math, actuarial science, or another related field
* One or more credentials from the Society of Actuaries or Joint Board of Enrolled Actuaries required
* If not fully credentialed, pursuit of FSA is required
* ProVal experience is a strong plus
* Project management experience
* Strong communication skills, including client consulting and presentations
Be aware of employment fraud. All email communications from Ascensus or its hiring managers originate ****************** ****************** email addresses. We will never ask you for payment or require you to purchase any equipment. If you are suspicious or unsure about validity of a job posting, we strongly encourage you to apply directly through our website.
For all virtual remote positions, in order to ensure associates can effectively perform their job duties with no distractions, we require an uninterrupted virtual work space and there is also an expectation of family care being in place during business hours. Additionally, there is an internet work speed requirement of 25 MBps or better for individual use. If more than one person is utilizing the same internet connection in the same household or building, then a stronger connection is required. If you are unsure of your internet speed, please check with your service provider. Note: For call center roles specifically, it is a requirement to either hardwire your equipment directly to the internet router or ensure your workstation is in close proximity to the router. Please ensure that you are able to meet these expectations before applying.
At Ascensus, we aspire to make a difference for others. We are a technology-enabled services company that helps people save for a better future through our network of institutional, financial advisor, and state partners. Our culture is guided by sound principles, is committed to high standards, operates with transparency, and welcomes diversity-housed within our Core Values: People Matter. Quality First. Integrity Always.
Ascensus provides equal employment opportunities to all associates and applicants for employment without regard to ancestry, race, color, religion, sex, (including pregnancy, childbirth, breastfeeding and/or related medical conditions), gender, gender identity, gender expression, national origin, age, physical or mental disability, medical condition (including cancer and genetic characteristics), marital status, military or veteran status, genetic information, sexual orientation, criminal conviction record or any other protected category in accordance with applicable federal, state, or local laws ("Protected Status").
Data Engineer (Snowflake & DBT)
Data Scientist Job 9 miles from Grandview
We are seeking candidates for a full-time data engineer position at our Overland Park, KS headquarters. We are open to candidates who are either hybrid or remote. The candidate will join a team developing data pipelines, building curated enterprise data products, and constructing system integrations for Mariner's ecosystem of platforms. This position requires excellent communication skills to collaborate across multiple business units in support of data-driven decision-making. As part of our collaborative team, you will integrate a diverse ecosystem of CRMs, portfolio accounting, trading, and ERP systems, crafting a seamless and interconnected suite of tools for Mariner. We are searching for positive, data-obsessed engineers with a passion for the industry and a capacity to leverage a variety of technical disciplines to support Mariner's growing portfolio of businesses.
Responsibilities
Developing enterprise data platforms, data extraction, and mastering pipelines leveraging modern ETL/ELT practices
Systems integrations using Python, APIs, orchestration tools, and scripting languages
Building and maintaining enterprise datasets using Snowflake, SQL, and DBT
Architecture and asset deployments using IaC and CI/CD tools
Developing and maintaining automation configurations and scheduling
Building tools and processes for data validation, testing, alerting, and monitoring
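As a rough illustration of the validation and data-exchange work above (the file layout and column names are invented, not Mariner's actual schema), a pipeline step might parse a CSV feed, verify required columns, and emit JSON:

```python
import csv
import io
import json

def csv_to_records(text: str, required: set[str]) -> list[dict]:
    """Parse CSV text and fail fast if required columns are missing --
    a toy version of the schema check a pipeline runs before loading."""
    reader = csv.DictReader(io.StringIO(text))
    missing = required - set(reader.fieldnames or [])
    if missing:
        raise ValueError(f"missing columns: {sorted(missing)}")
    return list(reader)

raw = "account_id,balance\nA1,100.50\nA2,200.00\n"
records = csv_to_records(raw, required={"account_id", "balance"})
payload = json.dumps(records)  # ready for a JSON-based exchange
```

Failing fast on schema drift at the edge of the pipeline keeps bad feeds out of downstream Snowflake tables.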
Requirements
Bachelor's degree in Computer Science, Engineering, or a related field
3+ years of professional experience in software development or data engineering preferred
Significant experience with SQL and Python
Skills & Knowledge
Relational databases and cloud data warehouses (SQL, Snowflake, Databricks, Redshift)
Data mastering and pipeline development (DBT, Python/Pandas, Spark, Streams/Tasks)
Building back-end services and processes in Python, Java, C#, or similar
Common data formats and data exchanges (JSON, CSV, JDBC, FTP, S3)
Experience with scripting languages (PowerShell, Bash/Sh, etc)
Cloud environments (AWS, Azure) and tools (lambda, ECS, S3)
DevOps and CI/CD practices (Docker, GitHub Actions) and IaC (Terraform)
Automation platforms (ActiveBatch, Airflow, Dagster, Windmill)
Enterprise systems such as Salesforce, ERPs, portfolio management platforms, order-management systems, and reporting tools (Tableau/Streamlit)
Prior experience in the financial services, investments, trading, or related industries
We welcome your interest in being a part of our firm. We believe in giving associates progressive opportunities, actively nurturing professional growth and giving back to the community. We are dedicated to building a diverse culture where everyone has the support they need to achieve their career goals. We offer an innovative workplace and a culture that fosters camaraderie, teamwork and work-life balance.
Our compensation reflects the cost of talent across multiple US geographic markets. The base pay for this position across all US geographic markets ranges from $80,000.00/year to $120,000.00/year. Pay is based on a number of factors including geographic location and may vary depending on job-related knowledge, skills, and experience. Eligibility to participate in an incentive program is subject to the rules governing the program, whereby an award, if any, depends on various factors including, without limitation, individual and organizational performance. Roles may also be eligible for additional compensation and/or benefits.
#LI-SA1
EOE M/F/D/V
Internship - Data & Analytics
Data Scientist Job 17 miles from Grandview
Who We Are: At VML, we are a beacon of innovation and growth in an ever-evolving world. Our heritage is built upon a century of combined expertise, where creativity meets technology, and diverse perspectives ignite inspiration. With the merger of VMLY&R and Wunderman Thompson, we have forged a new path as a growth partner that is part creative agency, part consultancy, and part technology powerhouse.
Our global family now encompasses over 30,000 employees across 150+ offices in 64 markets, each contributing to a culture that values connection, belonging, and the power of differences. Our expertise spans the entire customer journey, offering deep insights in communications, commerce, consultancy, CRM, CX, data, production, and technology. We deliver end-to-end solutions that result in revolutionary work.
Are you looking for an opportunity to be part of a team that values innovation and growth? Look no further than VML. We are a creative agency, consultancy, and technology powerhouse all in one, with a century of combined expertise under our belt. With over 30,000 employees across 150+ offices in 64 markets, we're a global family that values connection, belonging, and the power of differences.
As an intern at VML, you'll have the chance to contribute to a culture that is always pushing boundaries and delivering revolutionary work. Our expertise spans the entire customer journey, from communications to commerce, CRM to CX, data to production, and technology.
Join us and be part of a team that ignites inspiration and values diverse perspectives. As an intern, you'll have the opportunity to learn from industry experts and gain hands-on experience working on projects that make a real impact.
We can't wait to see what you'll bring to the table. Apply now and let's create something amazing together!
Who we are looking for:
The Innovation & Data Intern will work closely with the Innovation and Data/Analytics team to support data-driven decision making, identify opportunities for innovation, and contribute to the development of new products and services. The successful candidate will be responsible for conducting research, analyzing data, and assisting with the implementation of new initiatives.
What you'll do:
* Conduct research on industry trends, competitor activity, and emerging technologies
* Analyze data and provide insights to support business decisions
* Assist with the development of new products and services
* Collaborate with cross-functional teams to implement new initiatives
* Participate in brainstorming sessions to generate new ideas
* Prepare reports and presentations to communicate findings and recommendations
* Support the Innovation and Data/Analytics team with ad-hoc tasks as required
Who you are:
* Passion and desire to pursue a career in marketing, advertising, communications, or a related field
* Strong analytical skills with experience in data analysis and visualization tools such as Excel, Tableau, or Power BI
* Excellent written and verbal communication skills
* Ability to work independently and collaboratively in a fast-paced environment
* Creative problem solver with a passion for innovation
* Familiarity with Agile methodologies is a plus
What you'll need:
* Strong organizational skills and attention to detail
* Ability to manage multiple projects simultaneously
* Basic understanding of marketing principles and comfort working with large datasets
This will be a hybrid opportunity at the office of this posting. The dates for this program are June 3rd-August 7th. For more information, please visit: **************************************************
The compensation for this position at the time of this posting is indicated below. Individual compensation varies based on job-related factors, including location, business needs, level of responsibility, experience, and qualifications. We offer a competitive benefits package. Click WPP Benefits for more details.
$20 USD
At VML, we are committed to fostering an all-inclusive work environment that is both rewarding and career-forward. Our Inclusion, Equity & Belonging initiatives, alongside the VML Foundation, reflect our dedication to giving back and making a positive impact in our communities and beyond. Our people are the heartbeat of our organization-creators, doers, innovators, makers, and thinkers-who drive not just marketing, but meaningful experiences that resonate in every action and interaction.
VML is a WPP Agency. For more information, please visit our website, and follow VML on our social channels via Instagram, LinkedIn, and X.
When you click "Submit Application", this will send any information you add below to VML. Before you do this, we think it's a good idea to read through our Recruitment Privacy Policy. California residents should read our California Recruitment Privacy Notice. This explains what we do with your personal data when you apply for a role with us, and, how you can update the information you have provided us with or how to remove it.
Data Engineer
Data Scientist Job 20 miles from Grandview
Welcome to MFour!
Our Story: Founded in 2011, MFour is a fast-growing data and analytics technology company “democratizing” consumer insights. Named one of 2023's “top 25 L.A. tech companies to watch” by Built In, we've created a SaaS ecosystem giving businesses access to united shopper behavior + opinion data that uncovers the whos, whats, whys, whens and wheres behind consumers in ways never before possible. We help 25% of the Fortune 100, plus organizations of all sizes, make product, brand and advertising decisions.
Our Product: Using the nation's most downloaded, highest-rated, and only Apple-approved app & web data collection and survey app (Surveys On The Go), MFour is the only consumer market research company that guarantees real, validated responses for every survey, giving businesses the data and insights they need to make impactful decisions.
Our offerings range from validated opinion and behavior trackers to advertising exposure measurement.
Our Goal: To eliminate the unknown (data blind spots) within market research in a way that recognizes a consumer's right to privacy - so marketers can make better decisions about new and existing products, advertising choices and competitive positioning. MFour is trusted by hundreds of leading brands, including Google, Microsoft, Samsung, Walmart, Disney, Spotify, and Lowe's.
Our Core Values: Simplicity, Humility, Quality, Consistency, and Innovation.
Your Journey to Success:
The Data Engineer will design, implement, and manage scalable data pipelines, databases, and processing systems to support engineering, product, and analytics teams. This includes developing and administering platforms like Databricks and Snowflake, optimizing data architecture, and ensuring performance, scalability, and reliability.
Your Mission:
Data Infrastructure Development:
Design, build, and maintain scalable data pipelines to process large volumes of structured and unstructured data.
Optimize data architecture for both real-time and batch processing workflows.
Integrate and maintain PostgreSQL, MySQL, Databricks, and Snowflake within our data ecosystem.
Databricks Development and Administration:
Develop and manage workflows on Databricks to support ETL/ELT processes.
Optimize Databricks performance for distributed computing and data processing.
Monitor, troubleshoot, and ensure the reliability of Databricks clusters and jobs.
Snowflake Development and Administration:
Design and optimize Snowflake schemas and data warehouses for efficient querying.
Develop and manage data ingestion pipelines for Snowflake using modern tools and frameworks.
Administer Snowflake environments, including account management, performance tuning, and resource optimization.
Database Management:
Administer PostgreSQL and MySQL databases, ensuring high availability and performance.
Implement and enforce data governance practices to ensure data security and compliance.
Collaboration:
Partner with analytics and engineering teams to define data needs and deliver solutions for business insights.
Work with product teams to align data solutions with product goals and user requirements.
Monitoring and Troubleshooting:
Set up monitoring and alerting systems for data platforms and pipelines.
Proactively debug and resolve data-related issues to ensure system reliability.
Innovation and Best Practices:
Stay current with emerging trends and advocate for tools and practices that enhance data engineering efficiency.
Document workflows, pipelines, and platform configurations to foster team collaboration and knowledge sharing.
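The monitoring and alerting duties above often start with a freshness check. A minimal sketch, assuming load timestamps are tracked per table (the table names and the six-hour threshold are hypothetical):

```python
from datetime import datetime, timedelta, timezone

def freshness_alerts(last_loaded: dict[str, datetime],
                     max_age: timedelta = timedelta(hours=6)) -> list[str]:
    """Return names of tables whose most recent load is older than
    max_age, i.e. the tables a monitor would raise an alert for."""
    now = datetime.now(timezone.utc)
    return sorted(t for t, ts in last_loaded.items() if now - ts > max_age)
```

In practice the timestamps would come from warehouse metadata (e.g. a load-audit table) and the alert would feed a paging or ticketing system.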
What Sets You Apart
Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
3+ years of professional experience in data engineering or related fields.
Proficiency in Databricks development and cluster administration.
Hands-on experience with Snowflake development and administration, including performance tuning and security configuration.
Strong understanding of relational databases, particularly PostgreSQL and MySQL.
Expertise in programming languages like Python, Scala, or SQL for data processing and automation.
Familiarity with ETL/ELT tools and frameworks such as Apache Airflow or dbt.
Experience with cloud platforms (AWS, Azure, or GCP).
Preferred Skills:
Knowledge of distributed systems and big data frameworks like Spark.
Experience with CI/CD pipelines for deploying data workflows.
Familiarity with real-time streaming technologies (Kafka, Kinesis, etc.).
Why Join Us?
Work with a passionate, innovative team driving the next wave of industry solutions.
Opportunities for professional growth and continuous learning.
Competitive compensation, comprehensive benefits, and a flexible work environment.
In return for your dedication, we offer:
Salary
$125,000
Health benefits
Top-tier health benefits, including medical, dental, vision, LTD, and life insurance
Mental health benefits
Work-life balance
Unlimited PTO
Flexible hybrid schedule
Additional benefits
In the WeWork office in the heart of the financial district in downtown Kansas City
Open space concept & team-focused atmosphere
Company-wide celebrations
Team bonding events and happy hours
CHECK US OUT: ************************************* OxGias&t=1s
OUR APP : ***************************
Data Engineer
Data Scientist Job 9 miles from Grandview
Description
Company Overview
Shamrock Trading Corporation is the parent company for a family of brands in transportation services, finance and technology. Headquartered in Overland Park, KS, Shamrock is frequently recognized among the “Best Places to Work” in Kansas City and Chicago and was most recently recognized as one of America's top 100 “Most Loved Workplaces” by Newsweek. We also have offices in Atlanta, Chicago, Dallas, Ft. Lauderdale, Houston, Laredo, Nashville, Philadelphia and Phoenix. With average annual revenue growth of 25% over several decades, Shamrock's success is attributed to three key factors: hiring the best people, cultivating long-term relationships with our customers and continually evolving in the marketplace.
Responsibilities
Shamrock Trading Corporation is looking for a Data Engineer who wants to utilize their expertise in data warehousing, data pipeline creation/support and analytical reporting by joining our Data Services team. This role is responsible for gathering and analyzing data from several internal and external sources, designing a cloud-focused data platform for analytics and business intelligence, and reliably providing data to our analysts. It requires a significant understanding of data mining and analytical techniques. An ideal candidate will have strong technical capabilities, business acumen, and the ability to work effectively with cross-functional teams. Responsibilities include but are not limited to:
Develop & Maintain Scalable Data Pipelines: Build, optimize, and maintain ETL/ELT pipelines using Databricks, Apache Spark, and Delta Lake.
Optimize Data Processing: Implement performance tuning techniques to improve Spark-based workloads.
Cloud Data Engineering: Work with AWS services (S3, Lambda, Glue, Redshift, etc.) to design and implement robust data architectures.
Real-time & Streaming Data: Develop streaming solutions using Kafka and Databricks Structured Streaming.
Data Quality & Governance: Implement data validation, observability, and governance best practices using Unity Catalog or other tools.
Cross-functional Collaboration: Partner with analysts, data scientists, and application engineers to ensure data meets business needs.
Automation & CI/CD: Implement infrastructure-as-code (IaC) and CI/CD best practices for data pipelines using tools like Terraform, dbt, and GitHub Actions.
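The merge-style loads behind pipelines like these follow upsert semantics. A pure-Python sketch of what Delta Lake's MERGE INTO does at table scale (the keys and rows are illustrative, and the real operation also supports deletes and conditional updates):

```python
def merge_upsert(target: dict[str, dict], updates: list[dict],
                 key: str = "id") -> dict[str, dict]:
    """Update rows whose key already exists, insert the rest.
    The original target mapping is left unmodified."""
    merged = dict(target)
    for row in updates:
        merged[row[key]] = row
    return merged
```

Running the same batch of updates twice yields the same result, which is the idempotence property that makes merge-based loads safe to retry.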
Qualifications
Bachelor's degree in computer science, data science or related technical field, or equivalent practical experience
2-5+ years of experience in data engineering, with a focus on cloud-based platforms.
Strong hands-on experience with Databricks (including Spark, Delta Lake, and MLflow).
Experience building and maintaining AWS-based data pipelines: currently utilizing AWS Lambda, Docker / ECS, MSK, Airflow, Databricks, Unity Catalog
Development experience utilizing two or more of the following:
Python (Pandas/NumPy, Boto3, SimpleSalesforce)
Databricks (PySpark, Spark SQL, DLT)
Apache Spark
Kafka and the Kafka Connect ecosystem (schema registry and Avro)
Familiarity with CI/CD for data pipelines and infrastructure as code (Terraform, dbt)
Strong SQL skills for data transformation and performance tuning.
Understanding of data security and governance best practices.
Enthusiasm for working directly with customer teams (Business units and internal IT)
Preferred Qualifications
Proven experience with relational and NoSQL databases (e.g. Postgres, Redshift, MongoDB)
Experience with version control (git) and peer code reviews
Familiarity with data lakehouse architectures and optimization strategies.
Familiarity with data visualization techniques using tools such as Grafana, Power BI, Amazon QuickSight, and Excel.
Benefits Package At Shamrock we hire bright, ambitious people and give them the tools they need to be successful. By investing in training and development, we hope to become a long-term career for employees, where there are always opportunities for advancement. Shamrock also offers a premier set of benefits for employees and their families:
Medical: Fully paid healthcare, dental and vision premiums for employees and eligible dependents
Work-Life Balance: Competitive PTO and paid leave policies
Financial: Generous company 401(k) contributions and employee stock ownership after one year
Wellness: Onsite gym and discounted membership to select fitness centers. Jogging trails available at Overland Park offices
#LI-NB1 #LI-Remote
Data Engineer
Data Scientist Job 17 miles from Grandview
Full-time
Description
We are a leading healthcare technology company dedicated to transforming the way healthcare organizations leverage data to improve patient care and operational efficiency. As a Healthcare Data Engineer, you will play a vital role in developing and executing data transformation activities, enabling seamless access and analysis of healthcare data for our clients.
Job Summary:
We are seeking a skilled and motivated Healthcare Data Engineer to join our dynamic team. In this role, you will be responsible for developing robust SQL, Python, and Scala based tools and executing data transformation activities in a streamlined manner. Your expertise will be crucial in ensuring the efficient and accurate extraction, transformation, and loading (ETL) of healthcare data, enabling our clients to make data-driven decisions and drive meaningful outcomes.
Responsibilities:
SQL Development: Design, develop, and optimize SQL-based tools, scripts, and queries for data transformation, integration, and extraction purposes. Leverage your deep understanding of SQL and relational databases to create efficient and scalable solutions.
Data Transformation: Execute end-to-end data transformation activities, including data extraction, cleansing, normalization, and aggregation utilizing standard data engineering tools. Develop and implement data mapping and transformation logic to ensure data integrity and consistency.
ETL Process Streamlining: Continuously improve and streamline our data transformation processes, ensuring optimal performance, scalability, and reliability. Identify and implement opportunities for automation and process optimization to enhance efficiency.
Data Quality Assurance: Perform thorough data validation and quality checks to ensure accuracy, completeness, and integrity of transformed data. Collaborate with data analysts and domain experts to resolve data-related issues and discrepancies.
Performance Optimization: Identify and address performance bottlenecks in SQL queries and processes. Optimize database structures, indexes, and query execution plans to improve overall system performance.
Documentation: Maintain clear and comprehensive documentation of SQL code, data transformation processes, and system configurations. Ensure knowledge transfer and effective collaboration with other team members.
Collaboration: Work closely with cross-functional teams, including data analysts, software engineers, and healthcare domain experts, to understand data requirements and translate them into technical solutions. Collaborate on data-related projects, contributing your expertise and insights.
Data Security and Compliance: Adhere to strict data security and privacy standards, including compliance with relevant regulations (e.g., HIPAA). Implement appropriate security measures to protect sensitive healthcare data.
Troubleshooting and Support: Investigate and troubleshoot data-related issues reported by clients or internal stakeholders. Provide timely support and resolution, ensuring minimal disruption to data processes.
Requirements
Qualifications:
Bachelor's degree in computer science, information systems, or a related field. Relevant certifications and additional education are a plus.
Strong experience (3+ years) as a Data Engineer or similar role, with a focus on data transformation and ETL development.
Solid understanding of data transformation concepts, data modeling, and database design principles.
Expertise in SQL programming, including complex query development, stored procedures, and database performance optimization with proficiency in working with relational databases (e.g., Oracle, SQL Server, MySQL) and related tools.
Strong experience (3+ years) with object-oriented languages and scripting tools including SQL, Python, Scala, and Spark
Strong experience (3+ years) with distributed data/computing tools (Hadoop, Spark) and cloud-based data services (AWS)
Familiarity with healthcare data standards (e.g., HL7, FHIR) and healthcare-related regulatory requirements (e.g., HIPAA) is highly desirable.
Strong analytical and problem-solving skills, with the ability to identify data-related issues and implement effective solutions.
Detail-oriented mindset with a focus on data accuracy, quality, and integrity.
Excellent communication and collaboration skills to effectively work with cross-functional teams and stakeholders.
Ability to work in a fast-paced, dynamic environment and manage multiple priorities.
Proven ability to work independently and with a team.
Salary Description $80,000 - $120,000
Data Engineer
Data Scientist Job 8 miles from Grandview
Job Details
Vitori Kansas Office - Leawood, KS | Full Time
Description
The Data Engineer is responsible for designing, developing, and maintaining scalable data pipelines and infrastructure to support data-driven decision-making across Vitori Health. This role requires expertise in ETL/ELT processes, data modeling, and cloud-based data warehousing solutions. The ideal candidate will have strong SQL skills and experience working with AWS and GCP environments. Familiarity with Kubernetes and GitHub is also beneficial. TPA (Third-Party Administrator) experience is a plus.
ESSENTIAL DUTIES AND RESPONSIBILITIES:
Data Pipeline Development:
Design, build, and maintain efficient and scalable ETL/ELT pipelines to process structured and unstructured data.
Optimize data workflows for performance, reliability, and cost-efficiency.
Develop data models and schemas to support analytics and reporting needs.
Data Warehousing & Management:
Manage and enhance data warehouse solutions using cloud platforms such as AWS Redshift, Google BigQuery, or Snowflake.
Implement data quality and validation processes to ensure accuracy and consistency.
Monitor and troubleshoot data pipelines and warehouse performance.
Cloud & Infrastructure:
Leverage cloud-native solutions for data storage, transformation, and orchestration (AWS Glue, Lambda, GCP Dataflow, etc.).
Manage access controls, security policies, and compliance measures for data infrastructure.
Evaluate and recommend new tools and technologies to improve data architecture.
Collaboration & Stakeholder Engagement:
Work closely with analysts, data scientists, and business teams to understand data requirements.
Translate business needs into scalable technical solutions.
Support reporting and analytics by providing clean, well-structured data assets.
Continuous Improvement:
Stay updated with industry trends, best practices, and emerging technologies in data engineering.
Automate repetitive tasks and optimize data processes for efficiency.
Contribute to the development of data governance policies and documentation.
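Data-quality and validation duties like those above often reduce to simple row-level rules. A minimal sketch with hypothetical, TPA-flavored field names (`member_id`, `claim_amount` are invented for illustration):

```python
def validate_rows(rows: list[dict]) -> list[tuple[int, str]]:
    """Toy data-quality pass: flag missing identifiers and negative
    amounts, returning (row_index, issue) pairs that a pipeline could
    route to an alerting or quarantine step."""
    issues = []
    for i, row in enumerate(rows):
        if row.get("member_id") in (None, ""):
            issues.append((i, "missing member_id"))
        if (row.get("claim_amount") or 0) < 0:
            issues.append((i, "negative claim_amount"))
    return issues
```

At warehouse scale the same rules would typically be expressed as SQL tests or constraints, but the shape of the check is the same.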
QUALIFICATIONS:
The Data Engineer must be a self-starter with strong problem-solving skills, attention to detail, and the ability to work independently while also collaborating with cross-functional teams. The individual should be highly analytical, detail-oriented, and capable of managing multiple projects in a dynamic and fast-paced environment. Excellent verbal and written communication skills are required, along with strong interpersonal and problem-solving abilities. The Data Engineer should have a proactive approach to identifying and resolving data challenges, ensuring optimal performance of data pipelines and warehouse solutions. Additionally, the role demands adaptability to evolving technologies and business requirements, as well as a commitment to continuous learning and professional development.
EDUCATION AND/OR EXPERIENCE:
A bachelor's degree in Computer Science, Information Systems, Data Engineering, or a related field is required. Candidates should have a minimum of five years of experience in data engineering, ETL/ELT development, and data warehousing. Strong proficiency in SQL and experience with relational and non-relational databases is essential. Experience with cloud data platforms such as AWS Redshift, Google BigQuery, or Snowflake is highly desirable, along with familiarity in programming languages like Python or Java for data processing. A solid understanding of data governance, security, and compliance best practices is needed, and TPA experience is considered a plus.
CERTIFICATIONS/LICENSURE/REGISTRATIONS:
AWS Certified Data Engineer - Associate (Preferred)
Google Cloud Professional Data Engineer (Preferred)
Other relevant certifications in data engineering and cloud technologies are a plus.
Data Engineer
Data Scientist Job 8 miles from Grandview
Become Part of the Torch.AI Journey!
Torch.AI is a defense-focused AI software company. Unlike traditional government contractors, our team of experts takes calculated risks to self-fund R&D behind the scenes and then sells complete products "off-the-shelf" to mission owners. We conduct deep research into understanding how AI and new data infrastructures can improve a growing array of national defense needs. This allows us to go from ideation to full capability deployment in weeks and months, instead of years. We're passionate about solving complex problems.
Torch.AI's focus on the U.S. defense and national security industry offers you an unparalleled opportunity to contribute directly to the safety and well-being of our nation while building new innovative technologies. As a vital partner to the U.S. government and our allies, helping to shape global stability, we offer a dynamic environment to tackle complex challenges across multidisciplinary domains. With substantial investment in innovation, Torch.AI is at the forefront of developing AI, autonomous systems, and advanced national security solutions, founded on the premise that information is the new battlefield. Join us in our mission to help the most important organizations in the world Unlock Human Potential.
The Role: Unlock Your Potential
As a Data Engineer at Torch.AI, you will be at the forefront of building scalable data solutions within and across Torch.AI's platform capabilities. You will design and implement robust data pipelines, systems, and processes to ingest, transform, and store data efficiently, enabling critical insights and operational effectiveness across U.S. defense and national security programs and projects.
Each of our customers requires unique technical solutions to enable an asymmetric advantage on the battlefield. Torch.AI's award-winning, patented software helps remove common obstacles such as manual-intensive data processing, parsing, and analysis, thereby reducing the cognitive burden of the warfighter. Our modular, end-to-end data processing, orchestration, and management platform supports a wide variety of military capabilities and operations. Customers enjoy enterprise-grade solutions that meet specialized needs. Torch.AI encourages company-wide collaboration to share context, skills, and expertise across a variety of tools, technologies, and development practices.
You'll work autonomously while driving coordinated, collaborative decisions across cross-functional product development teams of defense and national security experts, veterans, and experienced AI/ML software engineers. Your code will integrate elegant user experiences with unmatched back-end data processing capabilities by implementing responsive designs, optimizing application performance, and integrating with complex backend services and APIs. You will have the opportunity to harden and scale existing platform capabilities, tools, and technologies, while also working to innovate and introduce new iterative capabilities and features which benefit our company and customers.
Successful candidates thrive in a fast-paced, entrepreneurial, and mission-driven environment. We hire brilliant patriots. You'll be encouraged to think creatively, challenge conventional approaches, and identify alternative approaches to delivering customer value across complex problem sets. The day-to-day workflow will vary, adapting to the requirements of our customers and the technical needs of respective use cases. One day, you may be supporting the development of a new proof of capability concept for a new customer program; another, you may be refining system performance to help scale a production deployment; the next, you may be working directly with customers to understand their requirements with deep intellectual curiosity.
What Sets This Role Apart
Our decentralized operating model puts every employee at the forefront of our customers' missions. You'll work within and across both nimble customer-centric solutions teams and research and development teams.
We value customer intimacy, unique perspectives, and dedication to delivering lasting impact and results. You'll have the opportunity to work on the frontlines of major customer programs and influence lasting success for Torch.AI and your teammates.
You'll have the opportunity to work on a wide range of projects, from designing and demonstrating early capabilities and prototypes to deploying large-scale mission systems.
You'll contribute directly to Torch.AI's continued position as a leader in data infrastructure AI in the market and compete against multi-billion-dollar incumbents and high-tech AI companies.
We develop solutions directly supporting our nation's warfighters, national security, and prosperity; the impact of your work is directly visible.
Critical Skills
B.S. degree in Computer Science or a related field.
5+ years of experience in software development, operations, or data engineering.
Knowledge of RESTful services, JSON parsing and querying.
Proven experience with Big Data technologies, including processing and analyzing large-scale datasets.
Proficient in NiFi for data integration, flow management, and system interoperability.
Basic familiarity with cloud storage and compute services such as AWS S3 and EC2 is essential.
Build and improve tools for application development and data science that enable users to build an end-to-end AI application quickly.
Collaborate closely with Product Management, User Interaction Designers, and Back-End Engineers.
Develop data models and analytics to meet mission requirements.
Test service artifacts for accuracy and performance.
Pull or receive command and control files.
Test services artifacts for correctness and/or performance.
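The skills above call out RESTful services and JSON parsing and querying. As a minimal stdlib sketch (the payload shape and field names here are hypothetical, not from any Torch.AI system), parsing a JSON response and querying it might look like:

```python
import json

# Hypothetical payload such as a REST endpoint might return.
payload = '''
{
  "assets": [
    {"id": "a1", "status": "active", "lat": 38.9, "lon": -94.6},
    {"id": "a2", "status": "offline", "lat": 39.1, "lon": -94.5}
  ]
}
'''

def active_asset_ids(raw: str) -> list:
    """Parse a JSON document and query it for the IDs of active assets."""
    doc = json.loads(raw)
    return [a["id"] for a in doc.get("assets", []) if a["status"] == "active"]

print(active_asset_ids(payload))  # ['a1']
```

In practice the raw string would come from an HTTP client rather than a literal, but the parse-then-filter pattern is the same.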
What We Value
Entrepreneurial mindset.
Subject matter expertise and proven success within the market segment.
Ability to convey complex technical concepts to stakeholders.
Aptitude for working collaboratively in interdisciplinary teams.
Awareness of ethical considerations and responsible AI practices.
Hands-on experience with Python or Java.
Understanding of security products, with a focus on IAM.
Knowledge of ETL, with a focus on data mapping.
Ability to translate data between various formats including JSON, Parquet, Avro.
Knowledge of Spark, Kafka, and Airflow.
Excellent problem-solving skills, attention to detail, and ability to thrive in a fast-paced, collaborative environment.
Eligible for Top Secret security clearance.
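One item above is the ability to translate data between formats such as JSON, Parquet, and Avro. Parquet and Avro typically require third-party libraries (e.g., pyarrow or fastavro), but the translation pattern can be sketched with the standard library using CSV as the target format; the records here are purely illustrative:

```python
import csv
import io
import json

# Hypothetical records in JSON Lines form.
jsonl = '{"id": 1, "kind": "sensor"}\n{"id": 2, "kind": "radar"}\n'

def jsonl_to_csv(text: str) -> str:
    """Translate JSON Lines records into a CSV document."""
    rows = [json.loads(line) for line in text.splitlines() if line.strip()]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(jsonl_to_csv(jsonl))
```

Swapping the CSV writer for a Parquet or Avro writer changes the sink, not the read-parse-write structure.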
Professional Ambiance
This role thrives in a cutting-edge, high-performance workspace.
Our base of operations is in Leawood, KS.
We have engineering and forward deployed mission teams in key cities across the U.S.
This is a full-time on-site role in our Kansas City headquarters.
Equity Program
All employees are eligible to participate in the company equity incentive program within their first 12 months of employment. We are proud that 100% of our employees are equity-owning partners at Torch.AI.
Incentives and Advantages
Competitive salary, performance bonus, and benefits package.
Opportunity to participate in Torch.AI's employee equity incentive program.
Unlimited PTO.
11 paid holidays each year.
Dynamic and energetic teammates.
Incredible chance for professional advancement in a rapidly scaling high-tech environment.
Weekly in-office catering in our Leawood HQ.
Access to company entertainment suite at the Kansas City T-Mobile Center, with tickets to all major events and concerts.
Exceptional medical, dental, and vision insurance.
Company sponsored life and disability coverage.
Relocation benefits.
Torch.AI is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, age, sex, national origin, protected veteran status or status as an individual with a disability.
These positions are being reviewed and filled on a rolling basis, and multiple openings may be available for each role.
JOB CODE: 1000071
Data Warehouse Engineer
Data Scientist Job 9 miles from Grandview
Morton Salt is an iconic company with a strong heritage and a bright future. Since 1848, we have been improving lives and enhancing everyday moments - at home, at work and virtually everywhere in between. We help unlock the flavors in food, make roads and sidewalks safer, improve the water in baths, pools, and homes, and keep businesses and industries running. We are a dedicated team who constantly strives to do better together, and we are passionate about building a sustainable future for our company, the communities in which we operate, and the world around us. By joining our team, you will contribute to producing and delivering every form of salt that enhances everyday life.
Job Summary
We are seeking a detail-oriented Data Warehouse Engineer with expertise in ETL processes, SQL, Azure Data Factory, and Power Queries to join our growing Data & Analytics team. In this role, you will be responsible for designing, developing, and maintaining data pipelines, ensuring high-quality data integration and transformation to support business reporting and analytics. You will collaborate with cross-functional teams to understand data requirements, troubleshoot data issues, and continuously optimize our data infrastructure.
Duties and Responsibilities
ETL Development & Maintenance:
Design, build, and maintain ETL pipelines using SSIS and Azure Data Factory to extract, transform, and load data from various sources into our data warehouse and analytics platforms.
Create and maintain SQL queries to extract and transform data from various data sources.
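The extract-and-transform duty above can be sketched with an in-memory SQLite database; the table and column names are illustrative, not an actual warehouse schema:

```python
import sqlite3

# Toy in-memory source table standing in for a real data source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("midwest", 120.0), ("midwest", 80.0), ("south", 50.0)],
)

# Extract and transform in one SQL query: aggregate amounts per region,
# the kind of shaping an ETL step performs before loading.
rows = conn.execute(
    "SELECT region, SUM(amount) AS total FROM sales "
    "GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('midwest', 200.0), ('south', 50.0)]
```

In a production pipeline the same query logic would run against the source system or staging area, with the results loaded into the warehouse rather than printed.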
Data Integration & Quality:
Work closely with data engineers, business analysts, and stakeholders to gather requirements and implement robust data solutions.
Monitor data quality and ensure data integrity throughout the ETL processes.
Identify and resolve data discrepancies, performance issues, and bottlenecks.
Reporting & Analysis:
Support business intelligence initiatives by preparing data extracts and performing ad hoc data analysis.
Collaborate with BI developers to design dashboards and reports that meet business needs.
Process Improvement:
Continuously review and improve ETL processes for scalability, efficiency, and reliability.
Stay current with industry trends and best practices in data integration, cloud data services, and analytics.
Documentation & Collaboration:
Document ETL processes, data mappings, and technical specifications.
Communicate effectively with team members and stakeholders to ensure alignment on project goals and timelines.
Knowledge, Skills, and Abilities
Bachelor's degree in Computer Science, Information Systems, Data Analytics, or a related field.
2-5 years of experience in data analysis or ETL development.
Hands-on experience with Azure Data Factory and Microsoft Azure data services (e.g., Azure SQL Database, Azure Blob Storage).
Familiarity with SQL, data modeling, and relational databases.
Experience with C#, VB, and scripting languages (e.g., JavaScript, Python) for data manipulation is a plus.
Experience with Power Queries, Power BI Dataflows, and Semantic Models.
Strong analytical and problem-solving skills.
Excellent communication and documentation abilities.
Ability to work independently as well as in a collaborative team environment.
Detail-oriented mindset with a focus on data accuracy and quality.
At Morton Salt, we work best when we work as a team, when we treat one another with dignity and respect, and value the unique contributions of others. We are committed to equal employment opportunity and prohibit discrimination and harassment based on race, national origin, sex, religion, color, disability, marital status, protected veteran status, sexual orientation, gender identity, gender expression, genetic information, citizenship, or any other characteristic protected by law.
Sr. Data Engineer
Data Scientist Job 17 miles from Grandview
Employment Type: Full-Time, Mid-level Department: Business Intelligence CGS is seeking a passionate and driven Data Engineer to support a rapidly growing Data Analytics and Business Intelligence platform focused on providing solutions that empower our federal customers with the tools and capabilities needed to turn data into actionable insights. The ideal candidate is a critical thinker and perpetual learner; excited to gain exposure and build skillsets across a range of technologies while solving some of our clients' toughest challenges.
CGS brings motivated, highly skilled, and creative people together to solve the government's most dynamic problems with cutting-edge technology. To carry out our mission, we are seeking candidates who are excited to contribute to government innovation, appreciate collaboration, and can anticipate the needs of others. Here at CGS, we offer an environment in which our employees feel supported, and we encourage professional growth through various learning opportunities.
Skills and attributes for success:
-Complete development efforts across the data pipeline to store, manage, and provision data to data consumers.
-Being an active and collaborating member of an Agile/Scrum team and following all Agile/Scrum best practices.
-Write code to ensure the performance and reliability of data extraction and processing.
-Support continuous process automation for data ingest.
-Achieve technical excellence by advocating for and adhering to lean-agile engineering principles and practices such as API-first design, simple design, continuous integration, version control, and automated testing.
-Work with program management and engineers to implement and document complex and evolving requirements.
-Help cultivate an environment that promotes customer service excellence, innovation, collaboration, and teamwork.
-Collaborate with others as part of a cross-functional team that includes user experience researchers and designers, product managers, engineers, and other functional specialists.
Qualifications:
-Must be a US Citizen.
-Must be able to obtain a Public Trust Clearance.
-7+ years of IT experience including experience in design, management, and solutioning of large, complex data sets and models.
-Experience with developing data pipelines from many sources from structured and unstructured data sets in a variety of formats.
-Proficiency in developing ETL processes, and performing test and validation steps.
-Proficiency in manipulating data (Python, R, SQL, SAS).
-Strong knowledge of big data analysis and storage tools and technologies.
-Strong understanding of agile principles and the ability to apply them.
-Strong understanding of CI/CD pipelines and the ability to apply them.
-Experience with relational databases such as PostgreSQL.
-Comfort working with version control systems such as Git.
Ideally, you will also have:
-Experience creating and consuming APIs.
-Experience with DHS and knowledge of DHS standards a plus.
-Candidates will be given special consideration for extensive experience with Python.
-Ability to develop visualizations utilizing Tableau or PowerBI.
-Experience in developing Shell scripts on Linux.
-Demonstrated experience translating business and technical requirements into comprehensive data strategies and analytic solutions.
-Demonstrated ability to communicate across all levels of the organization and communicate technical terms to non-technical audiences.
Our Commitment:
Contact Government Services (CGS) strives to simplify and enhance government bureaucracy through the optimization of human, technical, and financial resources. We combine cutting-edge technology with world-class personnel to deliver customized solutions that fit our client's specific needs. We are committed to solving the most challenging and dynamic problems.
For the past seven years, we've been growing our government-contracting portfolio, and along the way, we've created valuable partnerships by demonstrating a commitment to honesty, professionalism, and quality work.
Here at CGS, we value honesty through hard work and self-awareness, professionalism in all we do, and delivering the best quality to our customers, building relationships that last for years to come.
We care about our employees. Therefore, we offer a comprehensive benefits package:
-Health, Dental, and Vision
-Life Insurance
-401k
-Flexible Spending Account (Health, Dependent Care, and Commuter)
-Paid Time Off and Observance of State/Federal Holidays
Contact Government Services, LLC is an Equal Opportunity Employer. Applicants will be considered without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Join our team and become part of government innovation!
Explore additional job opportunities with CGS on our Job Board:
*************************************
For more information about CGS please visit: ************************** or contact:
Email: *******************
$144,768 - $209,109.33 a year
Sr Data Scientist
Data Scientist Job 17 miles from Grandview
Our Firm American Century Investments is a leading global asset manager focused on delivering investment results and building long-term client relationships while supporting research that can improve human health and save lives. Founded in 1958, the firm's 1,400 employees serve financial professionals, institutions, corporations, and individual investors, offering a wide range of investment strategies across a variety of investment disciplines.
We are committed to providing institutional-quality, actively managed solutions with a performance-centered mindset. Our expertise spans global growth equity, global value equity, disciplined equity, multi-asset strategies, global fixed income, alternatives, and ETFs.
Privately controlled and independent, we focus solely on investment management. This empowers us to align our decisions with client expectations and concentrate on their long-term money management needs.
Our culture of winning behaviors exemplifies our dedication to clients every single day. Delivering investment results enables us to distribute over 40% of our dividends (more than $1.8 billion) to the Stowers Institute for Medical Research, a 500-person, non-profit basic biomedical research organization with a controlling interest in American Century Investments. Our dividend payments provide ongoing financial support for the Institute's work of uncovering the causes, treatments, and prevention of life-threatening diseases, like cancer.
For more information, please visit americancentury.com.
The Sr. Data Scientist on the Client Insights and Analysis Team will serve as a primary insights provider for the Wealth Management business channel. The role's primary responsibility is to work with business decision-makers to identify business issues and objectives, explore data mining and model building opportunities, build statistical models, present analytical results and models, and provide actionable insights. As a senior member of the team, the Sr. Data Scientist will assist their manager in exploring analytical opportunities, consult on the team's analytical work, and coach other team members. A strong entrepreneurial spirit, along with strong interpersonal, consultation, client management, and analytical skills, is key to success in this role.
Major Responsibilities and Accountabilities:
Actively participate in client-based cross-functional teams: represent data science profession on client-based cross-functional teams; identify business issues and resulting research needs; participate in client team project planning; fulfill identified team responsibilities: (10%)
Consult with decision-makers to identify business objectives, data sources, and analytical methodology: work independently or with other analysts to identify and define business issues and goals, and convert them into appropriate analytical objectives and strategies. Will explore and understand all internal data sources and maintain a clear understanding of the methodology behind the analysis and model building: (30%)
Provide research design and analysis, engage in data science and model building: based on identified needs, analyze database information or 3rd party data sources in response to data analysis requests. Identify and implement appropriate design and analysis, explore data sources to uncover useful information to help business clients in their decision making. Primarily will use Python and the AWS tech stack (Athena, Sagemaker, Glue, Redshift, etc.) as well as a variety of other data and analysis software. Activities include, but are not limited to: (50%)
* Create research design, measurement for clients in business execution
* Provide analysis reports and give proper interpretation to clients
* Build models to meet clients' objective using available data sources
* Report analytical results and recommendations to business clients
* Provide support for Client Insights and Analysis team; consult on data science and model building issues
* Help team development, be a coach for other analysts
* Act as single point of contact for business partner and address their needs with minimal guidance from manager
* Responsible for training and maintaining scoring mechanisms that drive our Next Best Action/Lead Management programs including Client Engagement, Interest and Lifetime Value Scoring
* Act as data steward for critical Distribution data sets such as those provided by key firm partners
Administrative: participate in professional development: (10%)
Required Skills:
* Ability to work independently and as a member of a team.
* Ability to analyze data and translate results into actionable insights. Excellent problem-solving skills are essential.
* Ability to communicate clearly and concisely, both verbally and in written form.
* Ability to prioritize work and manage multiple requests concurrently.
* Highly motivated self-starter with the ability to determine priorities, to plan, organize and follow through on assignments. Accuracy and attention to detail are essential.
* Must be customer focused.
* Ability to learn technical software applications and statistical techniques.
* Professional attitude and appearance.
* Ability to translate business issues into data analysis objectives.
* Understands latest techniques in statistical data analysis.
Required Experience:
* Master's or PhD degree in Statistics, Mathematics, Computer Science, or other analytical related field required.
* Excellent problem solving, analytical and quantitative analysis skills.
* Experience with Python or other statistical software language/application.
* Experience with AWS and its Data Science stack (Sagemaker, Glue, Athena, etc.)
* Experience with SQL in data warehouse environment.
* Working knowledge of Microsoft Office applications.
* Minimum 5 years data analysis experience.
* Excellent interpersonal, verbal and written communication skills.
* Experience working with senior management in presenting reports/making recommendations.
* Models the American Century Investments Winning Behaviors: Client Focused, Courageous and Accountable, Collaborative, Curious and Adaptable, Competitively Driven, Adheres to the highest ethical standards and business practices, and Supports a culture of compliance
Preferred:
* Knowledge of Financial Services industry or Direct to Consumer, ideally of the Asset Management space.
* 5+ years of experience building and implementing statistical models.
* 2+ years of experience managing projects, programs or vendors.
For New York based candidates, the salary range for this role is $139,000.00 - $167,000.00. All offers are based on various factors including but not limited to a candidate's location, skills, experience, and relevant education and/or training. This position is eligible for cash incentive providing the potential to earn more.
Additional Requirements:
Employees are required to be in the office on a scheduled frequency. Adherence to this schedule is essential to fulfilling the expectations of the role.
American Century Investments is committed to complying with the Americans with Disabilities Act and all other applicable Equal Employment Opportunity laws and regulations. As such, American Century strives to provide a reasonable accommodation to any qualified individual under the ADA to perform essential job functions.
American Century Investments believes all individuals are entitled to equal employment opportunity and advancement opportunities without regard to race, religious creed, color, sex, national origin, ancestry, physical disability, mental disability, medical condition, genetic information, marital status, gender, gender identity, gender expression, age for individuals forty years of age and older, military and veteran status, sexual orientation, and any other basis protected by applicable federal, state and local laws. ACI does not discriminate or adopt any policy that discriminates against an individual or any group of individuals on any of these bases.
#LI-Hybrid
2019 American Century Proprietary Holdings, Inc. All rights reserved.
Actuary
Data Scientist Job 17 miles from Grandview
**Summary/Objective** Serve as the technical expert and project manager for the annual funding, accounting, and compliance requirements for defined benefit pension plans. **Responsibilities** + Leverage tools and resources to provide clients with innovative and effective solutions
+ Assist in the development and delivery of quality client communication and deliverables in support of Senior Consultants
+ Prepare and manage internal project deadlines
+ Prepare, technically review, and serve as primary contact to clients on annual service offerings
+ Review pension benefit calculations in accordance with plan provisions, Internal Revenue Code, ERISA, and other legal regulations
+ Oversee annual employee pension statement projects
+ Assist in completing and managing non-routine client work such as plan design, plan termination, benefit statements, and other special projects
+ Assist in the development and training of student-level staff
**Skills & Qualifications**
- Bachelor's degree in math, actuarial science, or other related field
- One or more credentials from the Society of Actuaries or the Joint Board for the Enrollment of Actuaries required
- If not fully credentialed, pursuit of FSA is required
- ProVal experience a strong plus
- Project management experience
- Strong communication skills, including client consulting and presentations
_Be aware of employment fraud. All email communications from Ascensus or its hiring managers originate ****************** ****************** email addresses. We will never ask you for payment or require you to purchase any equipment. If you are suspicious or unsure about validity of a job posting, we strongly encourage you to apply directly through our website._
Ascensus provides equal employment opportunities to all associates and applicants for employment without regard to ancestry, race, color, religion, sex, (including pregnancy, childbirth, breastfeeding and/or related medical conditions), gender, gender identity, gender expression, national origin, age, physical or mental disability, medical condition (including cancer and genetic characteristics), marital status, military or veteran status, genetic information, sexual orientation, criminal conviction record or any other protected category in accordance with applicable federal, state, or local laws ("Protected Status").
Senior Data Engineer
Data Scientist Job 17 miles from Grandview
EquipmentShare is Hiring a Senior Data Engineer Your role in our team
At EquipmentShare, we believe it's more than just a job. We invest in our people and encourage you to choose the best path for your career. It's truly about you, your future, and where you want to go.
We are looking for a Senior Data Engineer to help us continue to build the next evolution of our data platform in a scalable, performant, and customer-centric architecture.
Our main tech stack includes Snowflake, Apache Airflow, AWS cloud infrastructure (e.g., Kinesis, Kubernetes/EKS, Lambda, Aurora RDS PostgreSQL), Python, and TypeScript.
What you'll be doing
We are typically organized into agile cross-functional teams composed of Engineering, Product, and Design, which allows us to develop deep expertise and rapidly deliver high-value features and functionality to our next-generation T3 Platform.
You'll be part of a close-knit team of data engineers developing and maintaining a data platform built with automation and self-service in mind to support analytics and machine learning data products for the next generation of our T3 Fleet that enable end-users to track, monitor and manage the health of their connected vehicles and deployed assets.
We'll be there to support you as you become familiar with our teams, product domains, tech stack and processes - generally how we all work together.
Primary responsibilities for a Senior Data Engineer
Collaborate with Product Managers, Designers, Engineers, Data Scientists and Data Analysts to take ideas from concept to production at scale.
Design, build and maintain our data platform to enable automation and self-service for data scientists, machine learning engineers and analysts.
Design, build and maintain data product framework to support EquipmentShare application data science and analytics features.
Design, build and maintain CI/CD pipelines and automated data and machine learning deployment processes.
Develop data monitoring and alerting capabilities.
Document architecture, processes and procedures for knowledge sharing and cross-team collaboration.
Mentor peers to help them build their skills.
Why We're a Better Place to Work
We can promise that every day will be a little different with new ideas, challenges and rewards.
We've been growing as a team and we are not finished just yet; there is plenty of opportunity to shape how we deliver together.
Our mission is to enable the construction industry with tools that unlock substantial increases to productivity. Together with our team and customers, we are building the future of construction.
T3 is the only cloud-based operating system that brings together construction workflows & data from constantly moving elements in one place.
Competitive base salary and market leading equity package.
Unlimited PTO.
Remote first.
True work/life balance.
Medical, Dental, Vision and Life Insurance coverage.
401(k) + match.
Opportunities for career and professional development with conferences, events, seminars and continued education.
On-site fitness center at the Home Office in Columbia, Missouri, complete with weightlifting machines, cardio equipment, group fitness space, racquetball courts, a climbing wall, and much more!
Volunteering and local charity support that help you nurture and grow the communities you call home through our Giving Back initiative.
Stocked breakroom and full kitchen with breakfast and lunch provided daily by our chef and kitchen crew.
About You
You're a hands-on developer who enjoys solving complex problems and building impactful solutions. Most importantly, you care about making a difference.
Take the initiative to own outcomes from start to finish - knowing what needs to be accomplished within your domain and how we work together to deliver the best solution.
You are passionate about developing your craft - you understand what it takes to build quality, robust and scalable solutions.
You'll see the learning opportunity when things don't quite go to plan - not only for you but for how we continuously improve as a team.
You take a hypothesis-driven approach - knowing how to source, create and leverage data to inform decision making, using data to drive how we improve, to shape how we evaluate and make platform recommendations.
So, what is important to us?
Above all, you'll get stuff done. More importantly, you'll collaborate to do the right things in the right way to achieve the right outcomes.
7+ years of relevant data platform development experience building production-grade solutions.
Proficient with SQL and a high-order object-oriented language (e.g., Python).
Experience with designing and building distributed data architecture.
Experience building and managing production-grade data pipelines using tools such as Airflow, dbt, DataHub, and MLflow.
Experience building and managing production-grade data platforms using distributed systems such as Kafka, Spark, and/or Flink.
Familiarity with event data streaming at scale.
Proven track record learning new technologies and applying that learning quickly.
Experience building observability and monitoring into data products.
Motivated to identify opportunities for automation to reduce manual toil.
EquipmentShare is committed to a diverse and inclusive workplace. EquipmentShare is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status.
#LI-Remote
Sr. Data Engineer
Data Scientist Job 20 miles from Grandview
Employment Type: Full-Time, Mid-level
Department: Business Intelligence
CGS is seeking a passionate and driven Data Engineer to support a rapidly growing Data Analytics and Business Intelligence platform focused on providing solutions that empower our federal customers with the tools and capabilities needed to turn data into actionable insights. The ideal candidate is a critical thinker and perpetual learner; excited to gain exposure and build skillsets across a range of technologies while solving some of our clients' toughest challenges.
CGS brings motivated, highly skilled, and creative people together to solve the government's most dynamic problems with cutting-edge technology. To carry out our mission, we are seeking candidates who are excited to contribute to government innovation, appreciate collaboration, and can anticipate the needs of others. Here at CGS, we offer an environment in which our employees feel supported, and we encourage professional growth through various learning opportunities.
Skills and attributes for success:
* Complete development efforts across the data pipeline to store, manage, and provision data to data consumers.
* Be an active and collaborative member of an Agile/Scrum team, following all Agile/Scrum best practices.
* Write code to ensure the performance and reliability of data extraction and processing.
* Support continuous process automation for data ingest.
* Achieve technical excellence by advocating for and adhering to lean-agile engineering principles and practices such as API-first design, simple design, continuous integration, version control, and automated testing.
* Work with program management and engineers to implement and document complex and evolving requirements.
* Help cultivate an environment that promotes customer service excellence, innovation, collaboration, and teamwork.
* Collaborate with others as part of a cross-functional team that includes user experience researchers and designers, product managers, engineers, and other functional specialists.
Qualifications:
* Must be a US Citizen.
* Must be able to obtain a Public Trust Clearance.
* 7+ years of IT experience including experience in design, management, and solutioning of large, complex data sets and models.
* Experience with developing data pipelines from many sources from structured and unstructured data sets in a variety of formats.
* Proficiency in developing ETL processes and performing test and validation steps.
* Proficiency in manipulating data (Python, R, SQL, SAS).
* Strong knowledge of big data analysis and storage tools and technologies.
* Strong understanding of the agile principles and ability to apply them.
* Strong understanding of the CI/CD pipelines and ability to apply them.
* Experience with relational databases, such as PostgreSQL.
* Comfort working with version control systems, such as Git.
Ideally, you will also have:
* Experience creating and consuming APIs.
* Experience with DHS and knowledge of DHS standards a plus.
* Candidates will be given special consideration for extensive experience with Python.
* Ability to develop visualizations utilizing Tableau or Power BI.
* Experience in developing Shell scripts on Linux.
* Demonstrated experience translating business and technical requirements into comprehensive data strategies and analytic solutions.
* Demonstrated ability to communicate across all levels of the organization and communicate technical terms to non-technical audiences.
Our Commitment:
Contact Government Services (CGS) strives to simplify and enhance government bureaucracy through the optimization of human, technical, and financial resources. We combine cutting-edge technology with world-class personnel to deliver customized solutions that fit our client's specific needs. We are committed to solving the most challenging and dynamic problems.
For the past seven years, we've been growing our government-contracting portfolio, and along the way, we've created valuable partnerships by demonstrating a commitment to honesty, professionalism, and quality work.
Here at CGS, we value honesty through hard work and self-awareness, professionalism in all we do, and delivering the best quality to our clients, maintaining those relationships for years to come.
We care about our employees. Therefore, we offer a comprehensive benefits package:
* Health, Dental, and Vision
* Life Insurance
* 401k
* Flexible Spending Account (Health, Dependent Care, and Commuter)
* Paid Time Off and Observance of State/Federal Holidays
Contact Government Services, LLC is an Equal Opportunity Employer. Applicants will be considered without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Join our team and become part of government innovation!
Explore additional job opportunities with CGS on our Job Board:
*************************************
For more information about CGS please visit: ************************** or contact:
Email: *******************
$144,768 - $209,109.33 a year