- To advance the organization by developing algorithms and building artificial intelligence and machine learning models that uncover connections and enable better decisions without human intervention.
- Experimentation is at the core of what you do. The role involves turning business questions into effective data analysis and providing meaningful recommendations. This is a unique hybrid role that draws on your knowledge of data infrastructure and your ability to drive insights.
- Develop and train models,
- Research new technologies,
- Participate in the recruitment process.
- Develop highly scalable systems, algorithms, and tools on one platform to support machine learning and deep learning solutions,
- Develop, integrate, and optimize end-to-end AI pipelines,
- Collect, analyze, and synthesize requirements and bottlenecks in the technology, systems, and tools used by machine learning engineers and scientists, and develop solutions that improve efficiency and leverage larger volumes of data,
- Adapt standard machine learning methods to best exploit modern parallel environments (e.g. distributed clusters, multicore SMP, GPUs, TPUs, and FPGAs),
- Explore state-of-the-art deep learning techniques,
- Partner with data science and domain engineering teams to support the business transformation through AI.
- University or advanced degree in engineering, computer science, mathematics, or a related field,
- 5+ years of experience developing and deploying machine learning systems into production,
- Strong experience working with a variety of relational SQL and NoSQL databases,
- Strong experience working with big data tools such as Hadoop, Spark, or Kafka,
- Experience with at least one cloud provider solution (AWS, GCP, Azure),
- Strong experience with object-oriented and functional programming languages such as Python, Java, C++, or Scala,
- Ability to work in a Linux environment,
- Industry experience building innovative end-to-end Machine Learning systems,
- Ability to quickly prototype ideas and solve complex problems by adapting creative approaches,
- Experience working with distributed systems, service-oriented architectures and designing APIs,
- Strong knowledge of data pipeline and workflow management tools,
- Expertise in standard software engineering methodology, e.g. unit testing, test automation, continuous integration, code reviews, design documentation,
- Relevant working experience with Docker and Kubernetes is a big plus.
It's always a good idea to include the benefits the company will provide, such as:
- Flexible hours to give you freedom and increase productivity
- Life insurance for you and your family members
- Work remotely in the comfort of your home
- Free gym membership so you can stay in shape
- Fun and energetic weekly team bonding events
Post the Job Now