What We Do
Blue Bean Software is a premier custom software and product development company, delivering tailor-made solutions for large enterprises as well as dynamic start-ups.
We pride ourselves on taking on complex problems and high-stakes projects, solving them through a balanced combination of technical expertise and a deep understanding of each client's needs.
We have a prominent presence in the financial services industry and have started to branch out into other industries, such as agritech and healthtech.
Who We Are
At Blue Bean Software, we believe in creating an environment where like-minded software engineers can express themselves freely and pursue their personal and professional growth. We encourage individuals to master their respective skill sets while working effectively within teams to overcome challenges and accomplish shared goals.
We firmly believe in maintaining a culture of self-motivation, integrity, and trust to drive productivity.
How We Work
We have a flat organisational structure and value collaboration between our teams. We empower individual team members to enable agile decision-making and streamlined communication across teams, delivering efficient and effective customer service at all times.
Role Overview
- Building and maintaining scalable ETL/ELT pipelines for high-volume, time-sensitive data
- Engineering data solutions on AWS using tools like Glue, S3, Redshift, Lake Formation
- Optimising big data workflows using Apache Spark, Kafka, and Python
- Contributing to Lakehouse/Data Mesh architecture and cloud migration efforts
- Automating infrastructure with Terraform and deploying via CI/CD pipelines
- Ensuring robust data security and collaborating across Agile teams
Your Skills and Experience
- 5+ years of professional experience in data engineering or related fields.
- Strong experience building ETL pipelines in cloud environments (preferably AWS).
- Proficiency in Python for scripting, data manipulation, and automation.
- Experience with Apache Spark and knowledge of the broader big data ecosystem.
- Hands-on experience with streaming technologies such as Kafka or Kinesis.
- Working knowledge of AWS services like S3, Glue, Lake Formation, Athena, IAM.
- Familiarity with CI/CD tools (e.g. GitHub Actions, Azure DevOps) and version control (Git).
- Experience with Terraform or other infrastructure-as-code frameworks.
- Exposure to Lakehouse/Data Mesh architectures.
- Understanding of security protocols including encryption, OAuth, SAML, and identity providers (AD/LDAP/Kerberos).
- Exposure to containerisation and orchestration tools like Docker and Kubernetes is advantageous.
- Familiarity with both relational and NoSQL databases.
Additional Information
Advantageous:
- Experience with dbt (data build tool).
- Exposure to Snowflake or similar cloud-native data warehouse platforms.
- Relevant certifications such as AWS Certified Data Analytics or Azure Data Engineer Associate.
- Experience with monitoring and observability tools for data pipelines.
Competencies:
- Strong analytical thinking and attention to detail.
- Comfortable working in high-pressure environments with time-sensitive data.
- Excellent problem-solving and debugging abilities.
- Team-oriented with strong communication skills.
- A proactive, solutions-driven mindset.
- Embraces change and thrives in Agile environments.
Work model: In-office