Data Scientist
Data Engineer Job 212 miles from Shreveport
We are a mid-sized Management Consulting, Automation, and Data/Process Science firm, established in 1993, serving Fortune 1000 companies throughout North America. We have developed a unique, template-based and data-centric approach to our client projects, which are conducted off-site from our Houston office. The Lab is proud to announce we have invested in a new office build-out in the Galleria area. We are mindful of employee experience and currently operate at 50% capacity in the office.
We are seeking a data scientist who is passionate about business processes, automation, operational data measurement, and the intellectual challenge of analyzing them. The person we seek has previous experience in successful data science roles, performing strategic analysis and/or operations improvement projects. The data scientist will be part of a management consulting and data science team that performs analysis on client data and assists with the development, implementation, and integration of pioneering solutions using different methods, techniques, and tools. You will ensure the rigor and underlying logic of the team's findings, optimize the analytical storyline, and develop superior, easy-to-comprehend documentation of operational analysis.
The data scientist will be responsible for gathering, analyzing and documenting business processes, developing business cases, developing analytics dashboards and providing domain knowledge to the team. The ideal candidate's favorite words are learning, data, scale, and agility. You will leverage your strong collaboration skills and ability to extract valuable insights from highly complex data sets to ask the right questions and find the right answers.
Simultaneously, you will help senior management further standardize the consulting tasks and related work product with the objectives of: reducing analytical cycle time, lowering labor costs and reducing document rework and editing. As you become more familiar with our product offering, you will also contribute to the refinement and extension of our findings and tools database/website which includes benchmarks, best practices and thousands of business process maps.
Responsibilities
Interface with clients to gather operational data for analysis
Analyze raw data from consulting client projects across multiple industries: assessing quality, cleansing, structuring for downstream processing
Design accurate and scalable prediction algorithms
Collaborate with team to bring analytical prototypes to production
Generate actionable insights for business improvements
Work alongside clients and internal team members to develop interactive, customer-facing business dashboards
Work with internal team members to develop methods to transform data to prepare for analysis and reporting
Manage the structure and functionality of our internal databases
Maintain and build tools to assist our research teams in updating, organizing and expanding existing database content
Navigate client roadblocks that slow down projects
Proactively report to internal team and clients on overall progress, potential issues, areas of potential improvement, etc.
Qualifications
Bachelor's degree or equivalent experience in quantitative field (Statistics, Mathematics, Computer Science, Engineering, etc.)
At least 1-2 years of experience in quantitative analytics or data modeling
Deep understanding of predictive modeling, machine-learning, clustering and classification techniques, and algorithms
Fluency in a programming language (Python, C, C++, Java, SQL)
Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Tableau)
Data Engineer
Data Engineer Job 198 miles from Shreveport
Contract to hire
Hybrid in Flower Mound, Texas
Required Skills & Experience
5+ years of experience with Databricks
7+ years of experience as a Data Engineer
Experience with Fivetran or Azure Data Factory (client is using Fivetran)
Experience working in an Azure environment
Experience programming in python and/or scala
Experience with Delta Lakes
Nice to Have Skills & Experience
SAP
Fivetran (ETL Tool)
JAR - Java Archive files
Job Description
A client in Flower Mound, TX is looking for a Senior Data Engineer to help with a Databricks implementation. The client is doing a complete rebuild of the data platform (they currently run on BigQuery and existing systems until everything is moved to Databricks). This Data Engineer will load data from SAP and work through the entire ETL process (the client is implementing a medallion architecture). We are looking for someone who has experience implementing Unity Catalog and Databricks. This will be a 1099 contract-to-hire role (you will be a 1099 contractor with the client directly for 3-6 months before converting to a full-time employee). This is a hybrid role in Flower Mound, TX.
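To make the medallion pattern concrete, here is a minimal PySpark sketch of a bronze/silver/gold load on Databricks. It is an illustration only, not the client's actual pipeline: the landing path, table names, and columns (order_id, order_date, net_amount) are assumptions.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: land the raw SAP extract as-is, preserving source fidelity.
bronze = spark.read.format("parquet").load("/mnt/landing/sap_orders/")
bronze.write.format("delta").mode("append").saveAsTable("bronze.sap_orders")

# Silver: deduplicate, conform types, and drop records without a key.
silver = (
    spark.table("bronze.sap_orders")
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_date"))
    .filter(F.col("order_id").isNotNull())
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.sap_orders")

# Gold: a business-level aggregate ready for reporting.
gold = silver.groupBy("order_date").agg(F.sum("net_amount").alias("daily_net_amount"))
gold.write.format("delta").mode("overwrite").saveAsTable("gold.daily_order_totals")
```

Each layer adds refinement while keeping the raw landing data intact, which is the core idea behind the medallion architecture named in the posting.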
Compensation:
$130,000/yr to $160,000/yr.
Exact compensation may vary based on several factors, including skills, experience, and education.
Employees in this role will enjoy a comprehensive benefits package starting on day one of employment, including options for medical, dental, and vision insurance. Eligibility to enroll in the 401(k) retirement plan begins after 90 days of employment. Additionally, employees in this role will have access to paid sick leave and other paid time off benefits as required under the applicable law of the worksite location.
Data Scientist
Data Engineer Job 281 miles from Shreveport
Min. required skills and qualifications:
Bachelor's or Master's degree in Engineering, Mathematics, Statistics, Computer Science, Philosophy, or a related field.
Strong problem-solving and analytical skills.
Entrepreneurial, self-starter mindset with a strong sense of curiosity and proactiveness.
Passion for data-driven decision-making and a desire to learn.
Understanding of basic statistical modeling and data visualization.
Candidates must be authorized to work without sponsorship. Visa sponsorship is not available for this role.
The position is an Austin-based, onsite role.
Key Job Responsibilities - Job role 1 (1 to 3 yrs exp)
Collaborate with clients to understand business challenges and develop data-driven solutions.
Analyze large datasets to uncover trends, patterns, and actionable insights for strategic decision-making.
Develop and implement statistical models, machine learning algorithms, and AI-driven solutions.
Experience with SAP ERP systems, including data extraction, analytics, and reporting.
Proficiency in Python, R, or SQL for data analysis and machine learning.
Understanding of the supply chain business with good PP/MM knowledge is a plus.
Additional requirements for Job role 2 (1 to 3 yrs exp)
Skilled in data wrangling, cleansing, and transformation of inventory-related datasets from SAP modules on Snowflake (see the sketch after this list).
Experience in building scalable pipelines, API-based data access, and workflow automation.
Experience working with supply chain and operations teams to align data strategies with business objectives.
Experience with inventory management, safety stock optimization, and SKU-level inventory flow analytics is a PLUS.
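As a rough illustration of the wrangling and transformation work described in this list, the sketch below reads a hypothetical SAP-sourced inventory snapshot from Snowflake and cleanses it with pandas. The connection parameters, the table name (mard_snapshot), and the column names are all assumptions, not details from the posting.

```python
import pandas as pd
import snowflake.connector

# Placeholder credentials; in practice these would come from a secrets manager.
conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",
    warehouse="ANALYTICS_WH", database="SUPPLY_CHAIN", schema="INVENTORY",
)

df = pd.read_sql(
    "SELECT material, plant, qty_on_hand, last_movement_dt FROM mard_snapshot", conn
)

# Cleanse: normalize keys, coerce types, drop rows without a material key.
df["material"] = df["material"].str.strip().str.upper()
df["last_movement_dt"] = pd.to_datetime(df["last_movement_dt"], errors="coerce")
df = df.dropna(subset=["material"])

# Transform: SKU/plant-level summary for downstream safety-stock analysis.
summary = df.groupby(["material", "plant"], as_index=False)["qty_on_hand"].sum()
print(summary.head())
```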
Hadoop and Spark Developer
Data Engineer Job 175 miles from Shreveport
Plano TX
At least 6 years of experience in Hadoop, Spark, Scala/Python
Good experience in end-to-end implementation of data warehouse and data marts
Strong knowledge and hands-on experience in SQL, Unix shell scripting
Preferred Qualifications:
Good understanding of data integration, data quality and data architecture
Experience in Relational Modeling, Dimensional Modeling and Modeling of Unstructured Data
Good understanding of Agile software development frameworks
Experience in Banking domain
Data Engineer (Azure Databricks)
Data Engineer Job 179 miles from Shreveport
At least 10 years of experience building and leading highly complex, technical engineering teams.
Strong hands-on experience in Databricks
Implement scalable and sustainable data engineering solutions using tools such as Databricks, Azure, Apache Spark, and Python. Data pipelines must be created, maintained, and optimized as workloads move from development to production for specific use cases (see the sketch after this list).
Experience managing distributed teams preferred.
Comfortable working with ambiguity and multiple stakeholders.
Comfortable working cross-functionally with product management and directly with customers; ability to deeply understand product and customer personas.
Expertise in the Azure Cloud platform
Good SQL knowledge
Knowledge of orchestrating workloads in the cloud
Ability to set and lead the technical vision while balancing business drivers
Strong experience with PySpark, Python programming
Proficiency with APIs, containerization and orchestration is a plus
Experience handling large and complex sets of data from various sources and databases
Solid grasp of database engineering and design principles.
Experience with Unity Catalog.
Familiarity with CI/CD methods desired
Good to have: Teradata experience (not mandatory)
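As referenced in the pipeline bullet above, here is a minimal sketch of how a Databricks workload might be parameterized as it moves from development to production, using Unity Catalog's three-level catalog.schema.table naming. The catalog, schema, and table names and the pipeline.env configuration key are assumptions for illustration.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# The environment is supplied per job or cluster; "dev" is the assumed default.
env = spark.conf.get("pipeline.env", "dev")
catalog = {"dev": "dev_lakehouse", "prod": "prod_lakehouse"}[env]

# Three-level Unity Catalog names keep dev and prod data fully separated.
df = spark.table(f"{catalog}.sales.transactions")
summary = df.groupBy("region").count()
summary.write.mode("overwrite").saveAsTable(f"{catalog}.sales.region_counts")
```

The same job definition can then be promoted between environments with only the configuration value changing.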
Data Engineer - Abu Dhabi Hedge Fund ($250k - $450k)
Data Engineer Job 212 miles from Shreveport
We've all thought about making the jump to Dubai or Abu Dhabi for obvious reasons, but the perfect opportunity to do so as a data engineer rarely comes along.
Well, this unique hedge fund is looking to relocate talented data engineers to Abu Dhabi to join and accelerate an ever-growing fund.
Who do you need to be?
You're either:
A Data Engineer within the Hedge Fund/Trading space
Coming from an outstanding educational background and working with complex data sets
If this is you - get in touch.
Tech stack is open.
Data Engineer
Data Engineer Job 189 miles from Shreveport
Mandatory certification: Databricks Certified Associate Developer for Apache Spark 3.0
This role is on the Quality Engineering team, where 70% of the effort will be developing automation frameworks for testing; the remaining 30% will be manual testing until it is fully automated.
Experience: 3 to 6 years. Must have good technical experience, be able to provide technical solutions for multiple modules in parallel on an as-needed basis, and bring tasks to closure on time.
At least 2 years of development experience in Hadoop (HDFS) programming using PySpark on Hive-based data warehouse projects, with good shell scripting experience.
Responsible for understanding requirements and developing automation solutions to validate the data loaded by the development team (see the sketch after this list).
Excellent communication and documentation skills; a strong team player with the flexibility to work in different time zones based on project needs.
Should be able to work independently. Prior experience with databases such as Oracle or SQL Server and with ETL will be an added advantage.
Nice to have: DQ experience, Autosys, Jenkins, RLM, etc.
Proactive, with good communication skills to articulate technical issues.
Exposure to Confluence and JIRA.
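As referenced above, here is a minimal sketch of the kind of automated check such a framework might run against a Hive table loaded by the development team. The table name, key column, and thresholds are assumptions.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

def validate_load(table: str, key_col: str, min_rows: int = 1) -> dict:
    """Run basic row-count, null-key, and duplicate-key checks on a loaded table."""
    df = spark.table(table)
    return {
        "row_count_ok": df.count() >= min_rows,
        "no_null_keys": df.filter(F.col(key_col).isNull()).count() == 0,
        "no_duplicate_keys": df.groupBy(key_col).count().filter("count > 1").count() == 0,
    }

checks = validate_load("warehouse.customer_dim", "customer_id")
assert all(checks.values()), f"Data quality checks failed: {checks}"
```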
Skills
Mandatory Skills: Big Data Hadoop Ecosystem, Python, SparkSQL
Benefits/perks listed below may vary depending on the nature of your employment with LTIMindtree (“LTIM”):
Benefits and Perks:
Comprehensive Medical Plan Covering Medical, Dental, Vision
Short Term and Long-Term Disability Coverage
401(k) Plan with Company match
Life Insurance
Vacation Time, Sick Leave, Paid Holidays
Paid Paternity and Maternity Leave
The range displayed on each job posting reflects the minimum and maximum salary target for the position across all US locations. Within the range, individual pay is determined by work location and job level and additional factors including job-related skills, experience, and relevant education or training. Depending on the position offered, other forms of compensation may be provided as part of overall compensation like an annual performance-based bonus, sales incentive pay and other forms of bonus or variable compensation.
Disclaimer: The compensation and benefits information provided herein is accurate as of the date of this posting.
LTIMindtree is an equal opportunity employer that is committed to diversity in the workplace. Our employment decisions are made without regard to race, color, creed, religion, sex (including pregnancy, childbirth or related medical conditions), gender identity or expression, national origin, ancestry, age, family-care status, veteran status, marital status, civil union status, domestic partnership status, military service, handicap or disability or history of handicap or disability, genetic information, atypical hereditary cellular or blood trait, union affiliation, affectional or sexual orientation or preference, or any other characteristic protected by applicable federal, state, or local law, except where such considerations are bona fide occupational qualifications permitted by law.
Safe return to office:
In order to comply with LTIMindtree' s company COVID-19 vaccine mandate, candidates must be able to provide proof of full vaccination against COVID-19 before or by the date of hire. Alternatively, one may submit a request for reasonable accommodation from LTIMindtree's COVID-19 vaccination mandate for approval, in accordance with applicable state and federal law, by the date of hire. Any request is subject to review through LTIMindtree's applicable processes.
Senior Data Engineer
Data Engineer Job 179 miles from Shreveport
City: Las Vegas, NV or Dallas, TX or Atlanta, GA
Onsite/ Hybrid/ Remote: Onsite
Work Authorization: US Citizens ONLY
Fulltime
Our client is seeking a highly skilled Senior Data Engineer to fill a full-time role. The Senior Data Engineer is responsible for designing, building, and managing robust data platforms and tools to enable efficient processing and analysis of large-scale data. This role involves developing and maintaining scalable data pipelines, ensuring data quality, and deploying machine learning models into production. The position requires close collaboration with business teams to enhance data models for business intelligence tools, driving accessibility and fostering data-driven decisions across the organization. Responsibilities include creating real-time and batch pipelines to handle high data volumes and translating business requirements into scalable data solutions through collaboration with cross-functional teams.
MUST Have:
Expertise in Python, SQL and CI/CD practices.
Hands-on experience with Databricks, Spark, and Fabric or equivalent tools.
Experience in data migration using Azure services
Key Responsibilities
Design, develop, and maintain real-time and batch data pipelines to process and analyze large datasets efficiently.
Create and maintain tools to ingest, curate, and provision first-party and third-party data for analytics, reporting, and data science.
Develop advanced data products and intelligent APIs while monitoring system performance, troubleshooting, and integrating new features.
Analyze data and design architecture to support business intelligence, AI/ML, and data products.
Implement data platform architectures that meet analytical requirements, emphasizing scalability, maintainability, and flexibility.
Provide technical leadership, mentoring, and code reviews, with a focus on test-driven development and CI/CD practices.
Qualifications
Experience
8+ years of experience as a data engineer with full-stack capabilities.
10+ years of programming experience.
5+ years of experience in cloud technologies (Azure, AWS, or Google Cloud).
Expertise in Python, SQL and CI/CD practices.
Hands-on experience with Databricks, Spark, and Fabric or equivalent tools.
Proficiency in designing and developing data ingestion, processing, and analytical pipelines for big data, NoSQL, and data warehouses.
Experience in data migration using Azure services (e.g., ADLS, Azure Data Factory, Event Hub, Databricks).
Advanced knowledge of big data and streaming technologies (e.g., Apache Spark, Kafka).
Skills
Strong understanding of data architecture, data modeling, and data security best practices.
Proficiency with BI tools like Power BI and Tableau.
Knowledge of developing intelligent applications and APIs.
Experience with RESTful APIs and messaging systems.
Critical thinking, problem-solving, and process improvement abilities.
Strong organizational and communication skills to work effectively in dynamic, fast-paced environments.
Preferred Skills
Experience with machine learning and ML pipelines.
Familiarity with Agile methodologies.
Proven ability to create technical documentation and deliver impactful presentations.
Education
Bachelor's degree in Computer Science, Information Systems, Data Science, Engineering, or a related field (required).
Master's degree in a relevant field (preferred).
Compensation & Benefits:
Base pay: $150k to $175k, dependent on experience
Plus bonus, vacation, medical, etc.
Senior Data Engineer
Data Engineer Job 179 miles from Shreveport
We are seeking an experienced Data Engineer to expand and optimize our data pipeline architecture while improving data flow and collection across cross-functional teams. The ideal candidate is a skilled data pipeline builder and data wrangler who thrives on developing efficient data systems from the ground up.
In this role, you will collaborate with software developers, database architects, data analysts, and data scientists to support data initiatives and ensure a consistent, optimized data delivery architecture. You will play a key role in aligning data systems with business goals while maintaining efficiency across multiple teams and systems.
Responsibilities:
Design, build, and maintain scalable data pipelines.
Develop large, complex data sets to meet functional and non-functional business requirements.
Automate manual processes, optimize data delivery, and improve infrastructure scalability.
Build and manage ETL/ELT workflows for efficient data extraction, transformation, and loading from diverse sources.
Develop analytics tools that leverage data pipelines to provide actionable business insights.
Support data-related technical issues and optimize data infrastructure for various business functions.
Collaborate with data scientists and analysts to enhance data-driven decision-making.
Write complex SQL queries and develop database objects (e.g., stored procedures, views).
Implement data pipelines using tools like SSIS, Azure Data Factory, Hadoop, Spark, or other ETL/ELT platforms.
Maintain comprehensive documentation on data pipelines, processes, and workflows.
Requirements:
Bachelor's degree in Computer Science, Statistics, Informatics, Information Systems, or related field (or equivalent experience).
4-6 years of experience as a Data Engineer.
Strong proficiency in SQL and T-SQL programming.
Hands-on experience with ETL processes, data migration, and pipeline optimization.
Experience working with relational (SQL) and NoSQL databases.
Proficiency in Python or other object-oriented scripting languages.
Experience analyzing and processing large, disconnected datasets.
Expertise in Cloud technologies (Azure preferred; GCP or AWS also considered).
Familiarity with Snowflake (a plus).
Experience with message queuing and stream processing tools (Pub-Sub, Azure Event Grid, Kafka, etc.).
Knowledge of data pipeline and workflow management tools (Airflow, Prefect, Apache NiFi, etc.; see the sketch after this list).
Experience with machine learning is a plus.
Strong analytical and problem-solving skills.
Ability to work in an Agile development environment.
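To illustrate the workflow-management tooling referenced above, here is a minimal sketch of a daily ETL/ELT job assuming a recent Airflow 2.x install; the DAG id and the task bodies are placeholders, not an actual pipeline from this role.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull raw records from a source system.
    pass

def transform():
    # Placeholder: cleanse and reshape the extracted data.
    pass

def load():
    # Placeholder: write the transformed data to the warehouse.
    pass

with DAG(
    dag_id="hypothetical_daily_etl",  # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```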
If you're a data-driven professional passionate about building scalable, high-performance data infrastructure, we'd love to hear from you!
Apply now!
CPG Domain SME - Data Engineering
Data Engineer Job 175 miles from Shreveport
About the Company:
Everest DX - We are a Digital Platform Services company headquartered in Stamford. Our platform/solution includes orchestration, intelligent operations with BOTs, and AI-powered analytics for enterprise IT. Our vision is to enable digital transformation for enterprises to deliver seamless customer experience, business efficiency, and actionable insights through an integrated set of futuristic digital technologies.
Digital Transformation Services - Specialized in designing, building, developing, integrating, and managing cloud solutions; modernizing data centers; building cloud-native applications; and migrating existing applications into secure, multi-cloud environments to support digital transformation. Our Digital Platform Services enable organizations to reduce IT resource requirements and improve productivity, in addition to lowering costs and speeding digital transformation.
Digital Platform - Cloud Intelligent Management (CiM) - An autonomous hybrid cloud management platform that works across multi-cloud environments and helps enterprises get the most out of their cloud strategy while reducing cost and risk and improving speed. For more information, please visit **************************
Role Overview:
We are looking for an experienced Data Architect to design and implement data solutions on Azure Cloud. The role involves leading data migration and transformation projects, engaging with customers, driving business growth through proposals and POCs, and building internal data competency.
Qualifications:
Experience: 8+ years in data architecture, with 3+ years on Azure Cloud.
Domain experience required: Consumer Packaged Goods
Technical Skills: Proficiency in Azure Data Services (Data Lake, Synapse, Data Factory, etc.), ETL, and data governance.
Soft Skills: Strong communication, leadership, and stakeholder management abilities.
Preferred: Azure certifications and experience in customer engagement or business development.
Key Responsibilities:
Design scalable, secure, and efficient data architectures on Azure Cloud.
Lead data migration and transformation initiatives, ensuring seamless integration and performance optimization.
Engage with customers to gather requirements, propose solutions, and lead technical discussions.
Develop proposals and execute POCs to showcase solution feasibility and drive business growth.
Build organizational data capabilities through mentoring, training, and establishing best practices.
Ensure compliance with data governance, security, and privacy regulations.
Collaborate with cross-functional teams to align data architecture with business goals.
Monitor and optimize Azure data systems for performance and scalability.
Stay updated on industry trends to deliver innovative and cutting-edge solutions.
Education: Bachelor's Degree and equivalent work experience.
Location: Plano, TX (Onsite)
Job Type: Fulltime
Equal Opportunity & Diversity Statement
At EverestDX, we are proud to be an equal opportunity employer committed to fostering diversity and inclusion in the workplace. Our employment decisions are based on merit, qualifications, and business needs, without regard to race, color, creed, religion, sex (including pregnancy, childbirth, or related medical conditions), gender identity or expression, national origin, ancestry, age, family-care status, veteran status, marital status, civil union status, domestic partnership status, military service, handicap or disability, or any other characteristic protected by applicable federal, state, or local law.
We believe that a diverse and inclusive workforce is essential for driving innovation, creativity, and success. We value the unique perspectives, backgrounds, and talents that each individual brings to our organization. Our commitment to equal opportunity extends to all aspects of employment, including recruitment, hiring, promotions, transfers, discipline, compensation, benefits, and training.
We recognize and appreciate the importance of providing a workplace where everyone feels valued, respected, and supported. We strive to create an environment that promotes fairness, equality, and opportunities for professional growth. Additionally, we comply with all applicable laws and regulations regarding equal employment opportunities and nondiscrimination.
EverestDX is an equal opportunity employer, dedicated to diversity and inclusion in every aspect of our workplace.
Data Engineer
Data Engineer Job 189 miles from Shreveport
Immediate need for a talented Data Engineer. This is a 6+ month contract opportunity with long-term potential and is located in Irving, TX; Tampa, FL; or New York, NY (Hybrid). Please review the job description below and contact me ASAP if you are interested.
Job ID:25-62368
Pay Range: $45 - $47/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).
Key Requirements and Technology Experience:
Key skills: Scala + Spark + Java
Big Data, Spark, Scala/Java, Hive, Impala
Strong SQL skills including analysis, optimization
Good to have: Iceberg, Ozone, Kafka, ML
Our client is a leading IT Industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.
Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
By applying to our jobs you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
Product Analytics - Data Scientist
Data Engineer Job 196 miles from Shreveport
Great American Media is home to the family-friendly portfolio of brands Great American Family, Great American Faith & Living, and Great American Pure Flix. We are dedicated to our audience, and we believe that strong brands and high-quality programming create an unparalleled fan experience across the Great American Media portfolio.
We are seeking a dynamic data professional with sharp business acumen, outstanding analytical abilities, and a demonstrated talent for effectively communicating actionable insights to join the team as a Product Analytics - Data Scientist.
Ideal candidates will be able to work a hybrid schedule at our Texas Headquarters in Arlington, New York City office, or Scottsdale, AZ office (typically onsite Tues/Wed/Thurs and remote Mon/Fri).
The Product Analytics - Data Scientist has experience working with diverse data sets and is passionate about transforming data into actionable insights. Our team plays a key role in driving business performance by providing insights to inform decision-making and identifying opportunities for growth. The Product Analytics - Data Scientist will play a critical role in using data to shape product strategies, improve user experience, and drive business outcomes.
Duties and Responsibilities
Data-Driven Product Strategy: Analyze product performance metrics (e.g., user engagement, adoption, churn) to uncover trends, generate actionable insights, and recommend strategies to optimize features and enhance the user experience.
Impact on Business Outcomes: Leverage data to shape product strategies that improve user experience, align with business objectives, and drive measurable outcomes.
Storytelling with Data: Translate data insights into compelling narratives and actionable recommendations, presenting to executive audiences on user growth, marketing performance, and product adoption.
Strategic Collaboration: Partner with business leaders, product managers, and stakeholders to align roadmaps, prioritize initiatives, and identify opportunities for improvement using data-driven insights.
Forecasting and Predictive Modeling: Develop models to predict user behavior, demand trends, and potential risks, providing guidance for strategic decision-making.
Experimentation and A/B Testing: Design, execute, and analyze A/B tests with statistical rigor to validate hypotheses and support data-driven decisions (see the sketch after this list).
Performance Reporting: Provide regular performance reports (weekly, monthly, quarterly) to track progress and inform stakeholders.
Ad Hoc Analyses: Respond to ad hoc analysis requests from leadership and key stakeholders to address critical business questions.
Metrics and Dashboards: Create new metrics, dashboards, and insights to monitor activities, optimize performance, and scale outcomes effectively.
Perform other duties as assigned.
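As a small illustration of the statistical rigor mentioned in the A/B testing duty above, a two-proportion z-test on conversion counts could look like the sketch below; the counts are made-up numbers, not real results.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results: converted users and exposed users for control vs. variant.
conversions = [420, 480]
exposures = [10_000, 10_050]

stat, p_value = proportions_ztest(count=conversions, nobs=exposures)
print(f"z = {stat:.2f}, p = {p_value:.4f}")

# Declare a winner only if p falls below the significance level chosen before the test.
```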
Qualifications
Bachelor's in Business, Statistics, Computer Science, Economics, Marketing, MBA, Engineering, Mathematics, Finance
3-5 years of work experience in Data Science, Marketing Analytics, Product Analytics Consulting, Decision Science and/or Business Analytics
3-5 years in data querying languages (e.g., SQL) and scripting languages (e.g., Python), with experience using big data technologies (e.g., BigQuery, Hive, Hadoop, Spark) to manipulate and analyze large datasets.
3-5 years of experience in building business facing visualizations and dashboards (e.g., Domo, Tableau, Looker)
Strong communication skills and ability to craft and present data driven analyses and insights to articulate performance and opportunities to key stakeholders.
Experience working with multiple marketing and digital marketing teams directly to influence and shape campaign measurement, analytics frameworks and drive decision-making
Experience in measuring digital marketing KPIs, ROI, marketing effectiveness, LTV, ROAS, sentiment analysis, etc.
Knowledge of statistics and optimization techniques. Hands-on experience with large datasets (i.e. data extraction, cleaning, modelling, interpretation and presentation)
Ability and enthusiasm to work in a fast-paced environment where deadlines can sometimes be tight, and stakeholder needs immediate.
Self-motivated and results-driven.
Preferred Qualifications
Master's degree in Business, Statistics, Computer Science, Economics, Marketing, MBA, Engineering, Mathematics, Finance
Experience managing and prioritizing multiple concurrent projects and driving initiatives in a large cross-functional environment
Experience in a consumer technology company
Experience with Streaming Metrics supporting an SVOD business.
Experience with Amazon Web Services, particularly Kinesis, S3, Athena, Redshift, Glue
Working Conditions
This position is in an office environment and may require some travel. The hybrid schedule typically consists of being onsite Tuesday/Wednesday/Thursday and remote Monday/Friday.
Physical Requirements
To meet the essential requirements of this position, you must be able to:
Sit for long periods of time; more than 30 minutes
Stand for long periods of time; more than 30 minutes
Type and complete other meticulous tasks
Bend and reach
Lift up to 15 pounds at a time
Direct Reports
None.
Compensation, Benefits and Application Process
Competitive base salary commensurate with experience. We offer a comprehensive benefits package including:
401(k) retirement plan with employer match
Employer paid medical, dental and vision insurance
Employer paid STD and LTD
Employer paid life insurance and AD&D plus voluntary supplemental options
Pet Insurance
Comprehensive paid time off - vacation, sick leave and holidays
GAC Media, LLC is an Equal Opportunity Employer that is committed to diversity and inclusion in the workplace. We prohibit discrimination and harassment of any kind based on race, color, sex, religion, sexual orientation, national origin, disability, genetic information, pregnancy, or any other protected characteristic as outlined by federal, state, or local laws.
Big Data Developer
Data Engineer Job 175 miles from Shreveport
Required Qualifications:
Candidate must be located within commuting distance of Plano, TX or be willing to relocate to the area. This position may require travel to project locations.
Bachelor's degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education.
At least 4 years of Information Technology experience
At least 3 years of experience in Hadoop, Spark, Scala/Python
Good experience in end-to-end implementation of data warehouse and data marts
Strong knowledge and hands-on experience in SQL, Unix shell scripting
All applicants authorized to work in the United States are encouraged to apply
Data Architect
Data Engineer Job 179 miles from Shreveport
Hiring on behalf of a client for the role below.
Job Title: Data Architect
Responsibilities:
1. Develop and maintain end-to-end Data Engineering pipelines for deploying, monitoring, and scaling machine learning models.
2. Collaborate with data scientists, software engineers, and DevOps teams to ensure seamless integration of ML models into production systems.
3. Optimize model deployment processes by leveraging containerization technologies such as Docker or Kubernetes (see the sketch after this list).
4. Implement continuous integration/continuous deployment (CI/CD) practices for ML model development lifecycle management.
5. Work closely with cross-functional teams to troubleshoot issues related to model performance or data quality in production systems.
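To make the deployment side of these responsibilities concrete, here is a minimal sketch of a prediction service that could be packaged with Docker and promoted through the CI/CD pipeline described above. The model artifact path, request shape, and endpoint name are hypothetical, and a scikit-learn-style model object is assumed.

```python
import pickle

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Hypothetical artifact produced by the training pipeline.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

class Features(BaseModel):
    values: list[float]  # assumed flat feature vector

@app.post("/predict")
def predict(features: Features) -> dict:
    # Assumes a scikit-learn-style .predict() interface.
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}
```

Containerizing a service like this and redeploying it on every approved change is what the CI/CD and Kubernetes items above amount to in practice.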
Requirements:
1. Bachelor's degree in Computer Science or a related field; advanced degree preferred.
2. Minimum 7 years of experience working as Data Engineer or similar role within a data-driven organization.
3. Experience with data modelling and database design is mandatory.
4. Strong understanding of machine learning concepts and algorithms is good to have.
5. Proficiency in Python for developing data pipelines/scripts.
6. Strong experience with SQL and PySpark is essential.
7. Familiarity with cloud platforms like AWS/Azure/GCP for building scalable infrastructure solutions is highly desirable.
8. Experience with version control systems like Git/GitHub for managing code repositories.
9. Excellent problem-solving skills with the ability to analyze complex technical issues related to data model deployments.
Senior Data Architect
Data Engineer Job 212 miles from Shreveport
About the Role
We are seeking a Senior Data Architect to join our team, playing a critical role in building and supporting a core enterprise data & machine learning platform. This is a hands-on role requiring expertise in DevOps, MLOps, CI/CD, platform automation, and software engineering principles. The ideal candidate has a strong computer science background, has led teams in large-scale software development, and is comfortable working in a federated environment, balancing individual data analytics & ML use cases against enterprise architecture principles.
Responsibilities
Lead the development of a data & ML platform in collaboration with data science, engineering, and architecture teams.
Develop roadmaps & implementation plans for strategic business initiatives.
Evaluate and integrate new technologies to optimize platform efficiency.
Ensure best practices in CI/CD, DevSecOps, and MLOps automation.
Prototype and test new tools, frameworks, and automation techniques.
Advise on architectural trade-offs between speed, cost, and technical debt.
Work closely with senior stakeholders to align architecture with business goals.
What We're Looking For
Experience in Computer Science, Software Engineering, or related field.
Bachelor's degree in Computer Science (or equivalent experience).
Strong hands-on experience with Databricks.
Background in object-oriented design and software development.
Experience leading software development teams in large, multi-tenant environments.
Knowledge of DevOps, automation tools, and cloud platforms (AWS preferred).
Excellent communication and leadership skills to collaborate with executive stakeholders.
Work Environment
Hybrid schedule - 4 days in-office, 1 day remote.
Opportunity to work on cutting-edge AI, ML, and automation technologies.
Collaborative, fast-paced environment with high visibility projects.
This is a great opportunity for a technical leader who wants to drive enterprise-level innovation in data and machine learning while remaining hands-on with development, automation, and cloud technologies.
Want to learn more? Apply today!
Lead Data Scientist
Data Engineer Job 212 miles from Shreveport
$180K-$220K base salary
U.S. Citizen or Green Card Holders Only
On Site 4 Days a Week in Houston, TX
Our client operates in the automotive/retail/supply chain space and is seeking a Lead Data Scientist with heavy logistics and supply chain industry experience. A background in working with logistics and supply chain data, as well as working on end-to-end data science projects, is important for this position.
Role Description:
Translate business needs into analytics/reporting requirements for data-driven decisions.
Stay updated with the latest data science techniques and technologies, implementing innovative solutions to enhance data analysis and modeling capabilities.
Communicate complex data insights effectively to stakeholders across the organization, including non-technical audiences.
Manage use case design and build teams, providing guidance as they develop and operationalize data science models.
Ensure analytical insights and products are integrated into business processes.
Guide and approve analytics/modeling approach, model deployment requirements, and quality assurance standards.
Provide input to the long-term plan for the Data Science team, focusing on talent acquisition and technology platforms.
Foster a culture of innovation and continuous improvement, exploring and adopting new data science technologies and methodologies.
Work with business and technical stakeholders to identify new technologies and opportunities that deliver measurable business value.
Own the analytics solution portfolio, including model maintenance and improvements over time.
Skills and Experience:
Master's degree or PhD in Computer Science, Statistics, Applied Mathematics, or related field, with 7+ years of experience in a data science or similar role
Industry experience working with logistics and supply chain data
Strong proficiency in Python and SQL
Extensive knowledge and working experience with machine learning libraries/frameworks and data processing/visualization tools (PyTorch, TensorFlow, NumPy, etc.)
Strong communication skills with experience presenting to stakeholders and breaking down technical topics to non-technical team members
Experience with cloud computing environments (AWS, Azure, GCP) and Data/ML platforms (Databricks, Spark)
Recent leadership experience with direct reports
Good understanding of programming best practices and building highly automated CI/CD pipelines
Data Scientist
Data Engineer Job 175 miles from Shreveport
Who we are:
ConnectedX is focused on Digital Transformation and Product Engineering Services, enabling clients to achieve their business, operating, and technology needs for the digital age. Our unique industry-based, consultative approach helps clients implement digital transformation initiatives. Headquartered in Dallas, Texas, U.S., ConnectedX is a preferred partner for leading Fortune 1000 enterprises and is consistently admired by clients and employees across the globe.
Role: Data Scientist
Plano, TX (Hybrid)
Duration: 12+ Months Contract
What you bring:
• Strong experience manipulating data sets and building statistical models; master's or Ph.D. in statistics, mathematics, or another quantitative field with a focus on ML/NLP/machine learning.
• Strong knowledge and experience in statistical and data mining techniques - GLM/regression, Random Forest, Boosting, text and data mining.
• Experience querying databases and using statistical computing languages: R, Python, SQL, etc.
• Experience creating and using advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modeling, clustering, decision trees, neural networks, etc.
• Experience with distributed data/computing tools: Map/Reduce, Hadoop, Hive, Spark, MySQL, etc.
• Experience visualizing/presenting data for stakeholders using Tableau, Business Objects, D3, etc.
• Experience with prescriptive and predictive analysis.
• Experience leveraging big data and search technologies (e.g., Spark, Elasticsearch, Natural Language Processing, Web Crawling).
• Knowledge of machine learning techniques and algorithms and the ability to apply them in data-driven natural language processing.
• Ability to quickly prototype ideas/solutions and perform critical analysis, using creative approaches to solve complex problems.
• Experience with automotive analytics is a plus.
• Excellent written and verbal communication skills
Big Data Developer
Data Engineer Job 179 miles from Shreveport
Role: Big Data Engineer
Job Type: Hybrid
• Analyze and understand data sources & APIs
• Design and develop methods to connect to and collect data from different data sources
• Design and develop methods to filter/cleanse the data
• Design and develop SQL and Hive queries and APIs to extract data from the store (see the sketch after this list)
• Work closely with data scientists to ensure the source data is aggregated and cleansed
• Work with product managers to understand the business objectives
• Work with cloud and data architects to define robust architecture in the cloud and to set up pipelines and workflows
• Work with DevOps to build automated data pipelines
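As referenced in the extraction bullet above, here is a minimal sketch of collecting records from a web API, cleansing them, and exposing them to Hive for downstream use. The endpoint, field names, and target table are hypothetical.

```python
import requests
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Assumes the endpoint returns a JSON array of payment records.
records = requests.get("https://api.example.com/v1/payments", timeout=30).json()
df = spark.createDataFrame(records)  # schema inferred from the records

cleansed = (
    df.dropDuplicates(["payment_id"])
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("payment_id").isNotNull())
)
cleansed.write.mode("overwrite").saveAsTable("staging.payments")
```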
Total Experience Required
• 4 years
• The candidate should have performed client-facing roles and possess excellent communication skills
Business Domain knowledge: Finance & banking systems, Fraud, Payments
Required Technical Skills
• Big Data-Hadoop, NoSQL, Hive, Apache Spark
• Python
• Java & REST
• GIT and Version Control
Desirable Technical Skills
• Familiarity with HTTP and invoking web-APIs
• Exposure to machine learning engineering
• Exposure to NLP and text processing
• Experience with pipelines, job scheduling and workflow management
Personal Skills
• Experienced in managing work with distributed teams
• Experience working in SCRUM methodology
• Proven sense of high accountability and self-drive to take on and see through big challenges
• Confident, takes ownership, willingness to get the job done
• Excellent verbal communications and cross group collaboration skills
Software Engineer
Data Engineer Job 281 miles from Shreveport
Top 3 Requirements:
5+ years of experience with AWS Lambda & DynamoDB (NoSQL) in a serverless environment.
5+ years of Event-Driven Development (EDD) & Event-Driven Architecture (EDA) using AWS EventBridge & Step Functions.
5+ years of hands-on experience with ReactJS, Java, Spring Boot.
Key Responsibilities:
Design & develop components using Java (Spring Boot), AWS Lambda, EventBridge, Step Functions & DynamoDB.
Establish coding standards, conduct code reviews, and manage deployments.
Collaborate with cross-functional teams & provide technical expertise in Java and AWS cloud technologies.
Build & optimize frontend UX using ReactJS.
Use Cases (Pharmacy Industry Focus):
Customer: Search for medications, place orders, track deliveries, manage prescriptions.
Pharmacist: Verify prescriptions, manage inventory, dispense medications, provide support.
Delivery: Handle order transportation, update statuses, manage returns.
Skills & Technologies:
Required: Java, ReactJS, AWS Lambda, Spring Boot, EventBridge, Step Functions, DynamoDB, Jenkins, UX Design.
Additional: Microservices, Serverless Architecture, B2B Software, Coding Best Practices.
Java Software Engineer
Data Engineer Job 179 miles from Shreveport
Hexaware is a dynamic and innovative IT organization committed to delivering cutting-edge solutions to our clients worldwide. We pride ourselves on fostering a collaborative and inclusive work environment where every team member is valued and empowered to succeed.
Hexaware provides access to a vast array of tools that enhance, revolutionize, and advance professional profiles. We complete the circle with excellent growth opportunities, chances to collaborate with high-profile customers, opportunities to work alongside brilliant minds, and the perfect work-life balance.
With an ever-expanding portfolio of capabilities, we delve deep into and identify the source of our motivation. Although technology is at the core of our solutions, it is the people and their passion that fuel Hexaware's commitment to creating smiles.
“At Hexaware, we encourage individuals to challenge themselves to achieve their full potential and propel growth. We trust and empower them to disrupt the status quo and innovate for a better future. We promote an open and inspiring culture that fosters learning and brings talented, passionate, and caring people together.”
We are always interested in, and want to support, both the professional and personal aspects of our employees. We offer a wide array of programs to help expand skills and supercharge careers. We help discover passion-the driving force that makes one smile, innovate, create, and make a difference every day.
The Hexaware Advantage: Your Workplace Benefits
Excellent health benefits with low-cost employee premiums.
A wide range of voluntary benefits such as legal, identity theft, and critical care coverage.
Unlimited training and upskilling opportunities through Udemy and Hexavarsity.
Role - Java Cloud Developer
Location - Dallas, TX
Job Type - Hybrid
Job Description:
Mandatory skills required: Java, AWS, Spring Boot, Microservices, Kubernetes, and experience with Blockchain, Crypto, or Digital Products.
• Total experience required: 12+ years, with a minimum of 8+ years as an experienced Java Cloud Engineer, and excellent communication.
• Experience on Blockchain or Crypto or Digital Product.
• Must have a strong background and hands on experience in designing and developing business applications using Java, Spring Boot, Microservices, Kubernetes and AWS.
• Must have proficiency in Java, Spring Boot, Kubernetes, Oracle, and AWS EKS, as well as familiarity with managed services like Lambda and DynamoDB.
• Must be proficient in CI/CD practices, container-based development, and have strong communication skills to drive and participate in meaningful discussions.
• Experience in Kafka to enable development of event-driven applications and handle high Transaction Per Second (TPS) traffic with low latency.
• Experience with building applications in high frequency, resilient transactional processing in a public cloud platform (AWS), achieving low latency, high scalability, and cost savings for the firm.
• Ability to do front end application development on Angular 16 and above when required.
• Experience with containerization using Docker and orchestration with Kubernetes, and familiarity with microservices architecture.
• Experience with the automated, functional, and regression testing using Java and Cucumber serenity framework.
Benefits offered by Hexaware:
Competitive Salary
Company Pension Scheme
Comprehensive Health Insurance
Flexible Work Hours and Hybrid Work Options
Paid annual holidays + public holidays.
Professional Development and Training Opportunities
Employee Assistance Program (EAP)
Diversity, Equity, and Inclusion Initiatives
Company Events and Team-Building Activities
Equal Opportunities Employer: Hexaware Technologies is an equal opportunity employer. We are dedicated to providing a work environment free from discrimination and harassment. All employment decisions at Hexaware are based on business needs, job requirements, and individual qualifications. We do not discriminate based on race including colour, nationality, ethnic or national origin, religion or belief, sex, age, disability, marital status, sexual orientation, parental status, gender reassignment, or any other status protected by law. We encourage candidates of all backgrounds to apply.