Propellor.ai Careers

FOUNDED

2016

TYPE

Products & Services

SIZE

20-100 employees

STAGE

Raised funding

Why join us

Skin in the game

We believe that individual and collective success both propel us ahead.

Cross Fertility

Borrowing from and building on one another’s varied perspectives means we are always viewing business problems through a fresh lens.

Sub-25s

A bunch of young Turks who keep our explorer mindset alive and kicking.

Future-proofing

Keeping an eye ahead, we are constantly upskilling to stay relevant.

Tech Agile

Tech changes quickly. Whatever your stack, we adapt quickly and easily.

Jobs at Propellor.ai

Founded 2016  •  Products & Services  •  20-100 employees  •  Raised funding
PySpark
Data engineering
Big Data
Hadoop
Spark
Remote only
3 - 6 yrs
₹12L - ₹15L / yr


At Propellor.ai, we are building an exceptional team of Data Engineers: passionate developers who want to push the boundaries of solving complex business problems with the latest tech stack. As a Data Engineer, you will work with various Technology and Business teams to deliver our Data Engineering offerings at large scale to our clients across the globe.

 

The person

  • Articulate
  • High Energy
  • Passion to learn
  • High sense of ownership
  • Ability to work in a fast-paced and deadline-driven environment
  • Loves technology
  • Problem solver
  • Able to see how technology and people together can create stickiness for long-term engagements

The Ask

Experience

3+ years of experience designing technological solutions to complex data problems, developing & testing modular, reusable, efficient and scalable code to implement those solutions. Ideally, this would include work on the following technologies:

  • Expert-level proficiency in PySpark/Python/Spark
  • Strong understanding of and experience with distributed computing frameworks, particularly Apache Hadoop (YARN, MapReduce, HDFS) and associated technologies: one or more of Hive, Sqoop, Avro, Flume, Oozie, ZooKeeper, Impala, etc.
  • Hands-on experience with Apache Spark and its components (Streaming, SQL, MLlib) is a strong advantage.
  • Operating knowledge of cloud computing platforms (AWS/Azure/GCP)
  • Experience working in a Linux environment and with command-line tools, including Shell/Python scripting for automating common tasks
  • Ability to work in a team in an agile setting, familiarity with JIRA, and a clear understanding of Git or another version control tool

In addition, the ideal candidate would have great problem-solving skills, and the ability & confidence to hack their way out of tight corners.

 

Must Have (hands-on) experience:

  • Python and PySpark expertise
  • Distributed computing frameworks (Hadoop Ecosystem & Spark components)
  • Must be proficient in any Cloud computing platforms (AWS/Azure/GCP)
  • Experience with GCP (BigQuery, Bigtable, Pub/Sub, Dataflow, App Engine), AWS, or Azure would be preferred
  • Linux environment, SQL and shell scripting

Desirable (would be a plus):
  • Statistical or machine learning DSL like R
  • Distributed and low latency (streaming) application architecture
  • Distributed NoSQL databases such as Cassandra, CouchDB, MongoDB, etc.
  • Familiarity with API design
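
The distributed-computing items above all build on the MapReduce model that Hadoop MR implements and Spark generalizes. As a rough, framework-free sketch of that model (a word count; all function names here are illustrative, not from any framework):

```python
from collections import defaultdict

# Map each record to (key, value) pairs, shuffle by key, reduce each group.
# This is the pattern Hadoop MR and Spark distribute across a cluster.

def map_phase(lines):
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

counts = reduce_phase(shuffle(map_phase(["Spark and Hadoop", "Spark scales"])))
```

In Spark, the same pipeline would collapse to a `flatMap` followed by `reduceByKey`, with the shuffle handled by the engine.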


Education:

  • B.Tech. or Equivalent degree in CS/CE/IT/ECE/EEE


Role Description

  • The role involves big data pre-processing and reporting workflows, including collecting, parsing, managing, analyzing and visualizing large data sets to turn information into business insights
  • Develop the software and systems needed for end-to-end execution on large projects
  • Work across all phases of SDLC, and use Software Engineering principles to build scalable solutions
  • Build the knowledge base required to deliver increasingly complex technology projects
  • You would be responsible for evaluating, developing, maintaining and testing big data solutions for advanced analytics projects
  • The role would also involve testing various machine learning models on Big Data, and deploying learned models for ongoing scoring and prediction.
  • An appreciation of the mechanics of complex machine learning algorithms would be a strong advantage.
  • Will be an integral part of client business development and delivery engagements
Job posted by
Anila Nair
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
recommendation algorithm
Remote only
3 - 8 yrs
₹10L - ₹15L / yr

At Propellor.ai, we are committed to the kind of growth that leads businesses to make better use of their data. Through our dedication to the latest tech stack and engineering methods, we help companies put their customers at the centre of the business. You’ll work with teams that push boundaries and build solutions. We value diversity, in both our similarities and our differences.

We are growing at a fantastic rate. You will be able to continuously improve yourself and us, working on challenges that truly matter. We are looking for a motivated, analytically minded candidate who wants to be part of a start-up-style environment. Ideally, you have a broad skill set and are willing to take on (or teach yourself to take on) any challenge. We are headquartered in Pune, but you may choose to work remotely.


PURPOSE OF ROLE:

  • Understand and solve complex business problems with sound analytical prowess and help business with impactful insights in decision-making
  • Make sure any roadblocks in implementation are brought to the notice of relevant stakeholders so that project timelines are not affected
  • Document every aspect of the project in a standard way, for future reference
  • Articulate technical complexities to senior leadership in a simple and clear manner


KEY TASKS AND ACCOUNTABILITIES:

  • Understand the business problem and translate that to a data-driven analytical/statistical problem; Own the solution building process
  • Create appropriate datasets and develop statistical data models
  • Translate complex statistical analysis over large datasets into insights and actions
  • Analyse results and present to stakeholders
  • Communicate the insights using business-friendly presentations
  • Help and mentor other Associate Data Scientists
  • Build a production-ready pipeline for the project
  • Build dashboards for easy consumption of solutions


BUSINESS ENVIRONMENT:

  • Work with stakeholders to understand their business problems, translate those problems into data-driven analytical solutions which can best address those business problems
  • Use data science/analytics prowess to provide answers to key business questions. Could be in any domain (e.g. Retail, Media, OTT)
  • Summarize insights and recommendations to be presented back to the business
  • Use innovative methods to continuously improve the quality of statistical models


SUCCESS CRITERIA:

  • Building an efficient model that helps the business stakeholder to measure outcomes of their decisions
  • High-quality, production-ready code written within the given timelines
  • Following processes and adhering to documentation goals
  • High quality presentations
  • Questions from business stakeholders answered satisfactorily within agreeable time

EXPERIENCE:

  • Minimum 3 years in a data science role, with experience building end-to-end solutions as well as implementing or operationalizing them
  • Extensive hands-on EDA experience (minimum 3 years)
  • Expertise in building statistical and machine learning algorithms for:
                - Regression
                - Time series
                - Ensemble learning
                - Bayesian statistics
                - Classification
                - Clustering
                - NLP
                - Anomaly detection
  • Hands-on experience in Bayesian statistics would be preferred
  • Exposure to optimization and simulation techniques (good to have)
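
As a minimal concrete instance of the regression work listed above, here is an ordinary-least-squares fit written out by hand in Python; the data is hypothetical and purely illustrative:

```python
from statistics import mean

# Fit y ≈ slope * x + intercept by ordinary least squares.
x = [1.0, 2.0, 3.0, 4.0, 5.0]   # e.g. weeks of ad spend (made-up numbers)
y = [2.1, 3.9, 6.2, 7.8, 10.0]  # e.g. conversions (made-up numbers)

mx, my = mean(x), mean(y)
# OLS estimator: slope = cov(x, y) / var(x); intercept from the means.
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx
```

In practice this would be done with a library (e.g. scikit-learn or statsmodels), but the estimator itself is the one above.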

SKILLS:

  • Proven skills in translating statistics into insights. Sound knowledge in statistical inference and hypothesis testing
  • English (Fluent)
  • Microsoft Office (mandatory)
  • Expert in Python (mandatory)
  • Advanced Excel (mandatory)
  • SQL (at least one database)
  • Reporting tool (any tool; must be open to learning new tools based on requirements)
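
To make the hypothesis-testing skill above concrete, here is a minimal one-sample, two-sided z-test sketch (known population standard deviation) using only the standard library; all numbers are made up for illustration:

```python
from math import sqrt
from statistics import NormalDist

mu0, sigma = 10.0, 1.0   # null-hypothesis mean and known population std. dev.
xbar, n = 10.5, 16       # hypothetical sample mean and sample size

# Test statistic: standardized distance of the sample mean from mu0.
z = (xbar - mu0) / (sigma / sqrt(n))
# Two-sided p-value from the standard normal CDF.
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
reject_at_5pct = p_value < 0.05
```

With unknown sigma one would use a t-test instead; the structure (statistic, reference distribution, p-value) is the same.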


EDUCATION:

  • B Tech (or equivalent) in any branch or degree in Statistics, Applied Statistics, Economics, Econometrics, Operations Research or any other quantitative fields


Mandatory Requirements:

  • Strong written and verbal English skills
  • Strong analytical, logic and quantitative ability
  • Desire to solve business problems in any domain through analytical/data-driven approaches


Here’s what our current team is enjoying through their work with us:

  • Competitive salary
  • Permanent Work from Home Opportunity
  • Work environment that offers limitless learning
  • A culture devoid of bureaucracy and hierarchy
  • A culture of openness, directness and mutual respect
  • A fun, high-calibre team that trusts you and provides the support and mentorship to help you grow
  • The opportunity to work on high-impact business problems that are already defining the future of trade finance and improving real lives 

Python
AWS Lambda
SQL
Data Structures
RESTful APIs
Amazon Web Services (AWS)
Remote only
2 - 8 yrs
₹15L - ₹20L / yr

Propellor is a B2B SaaS platform aimed at bringing Marketing Analytics and other business workflows to the cloud ecosystem. Propellor provides access to powerful, fast computing on AWS that helps marketers and business decision-makers view data and make decisions in real time. The product roadmap is evolving, and many more functionalities and features will be added. We are growing at a fantastic rate. This is an incredible opportunity for someone talented and ambitious to make a huge impact.

Role:

  • Develop & maintain APIs, libraries and frameworks
  • Improve the performance and reliability of our services including databases, CI/CD pipeline, web services, and other integrations
  • Monitor and scale our platform and cloud infrastructure to suit client needs
  • Collaborate with other teams on security, automation, and internal tools
  • Evaluate and develop new tools and technologies that can help achieve company-level goals


Required Candidate profile:

  • 3-8 years of experience in Platform/Product Engineering & Development
  • Good knowledge of backend technologies in Python 3.6+
  • Experience with databases such as MySQL, MongoDB, etc.
  • Working experience in a cloud environment
  • Experience in developing RESTful endpoints or other SOA endpoints
  • Understands basic algorithmic techniques, design patterns and best practices
  • Curious about how things work, with a habit of finding the answers
  • Experience with development of scalable and distributed Python services
  • Expertise in AWS and experience with Lambda functions is preferred
  • Experience working in an event-driven environment is preferable
  • Good problem-solving skills
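
The AWS Lambda and RESTful-endpoint items above reduce to a simple contract: a handler function that takes an event and a context and returns a response dict. A minimal sketch in that style, testable locally without AWS (the event shape mimics an API Gateway proxy request; routes and names are hypothetical, not Propellor's code):

```python
import json

def handler(event, context=None):
    """A minimal Lambda-style handler for a GET endpoint."""
    if event.get("httpMethod") != "GET":
        # Reject anything other than GET with 405 Method Not Allowed.
        return {"statusCode": 405,
                "body": json.dumps({"error": "method not allowed"})}
    # Read an optional ?name= query parameter, defaulting to "world".
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {"statusCode": 200,
            "body": json.dumps({"message": f"hello, {name}"})}

# Local invocation with a fake event, no AWS required:
response = handler({"httpMethod": "GET",
                    "queryStringParameters": {"name": "Propellor"}})
```

Because the handler is a plain function, the event-driven wiring (API Gateway, SQS, etc.) stays outside the code, which is what makes Lambda functions easy to unit-test.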

Who will you work with?

  • In this role, you’ll work directly with the CTO, Product Manager and Developers.


What’s in it for you to join us?

Here’s what our current team is enjoying through their work with us

  • Competitive salary
  • Permanent Work from Home Opportunity
  • Work environment that offers limitless learning
  • A culture devoid of bureaucracy and hierarchy
  • A culture of openness, directness and mutual respect
  • A fun, high-calibre team that trusts you and provides the support and mentorship to help you grow
  • The opportunity to work on high-impact business problems that are already defining the future of trade finance and improving real lives

Best of luck!
