Salary
8.5 - 9.5 M
Job Type
Permanent
Job Function
Information Technology
Years of Experience
3+ years
Location
Tokyo, Japan

Job Title: Data Engineer (Permanent Position)

Company Info: Multinational Insurance Company

Work Location: Tokyo

 

Language skills:

Japanese: Business level

English: Business level

 

Looking for an innovative and adaptable data expert with a strong desire to succeed!

If you are passionate about data, with an emphasis on quality programming and building the best possible solution, this is the opportunity for you!

 

Data engineers at the company carry out a wide variety of business intelligence tasks in a largely AWS-based cloud computing environment.

 

Responsibilities:

  • Building high-quality, sustainable data pipelines and ETL processes to extract data from a variety of APIs and ingest it into cloud-based services.
  • Efficiently developing complex SQL queries to aggregate and transform data for the analytics team and general users.
  • Maintaining accurate and error-free databases and data lake structures.
  • Conducting quality assessment and integrity checks on both new and existing queries and processes.
  • Monitoring existing solutions and working proactively to rapidly resolve errors and identify future problems before they occur.
  • Using data visualization tools such as Power BI, SSRS, Tableau, and Looker to develop high-quality dashboards and reports.
  • Consulting with a variety of stakeholders to gather new project requirements and transform them into well-defined tasks and targets.

 

You’ll have demonstrated experience working in a high-performing business intelligence or data warehouse environment, excellent communication skills, and a passion for problem solving and learning new technologies.

You’ll be exposed to a large variety of tasks, tools, and programming languages, so the desire and ability to constantly learn new skills is essential.

 

Skills Required:

  • 3-5 years of practical experience in data/analytics, with at least 1 year in an engineering/BI role.
  • At least 1 year of practical experience working on data pipelines or analytics projects with languages such as Python, Scala, or Node.js.
  • At least 2 years of practical experience working on data pipelines or analytics projects with SQL/NoSQL databases (ideally in a Hadoop-based environment).
  • Strong knowledge and practical experience working with at least four of the following AWS services: S3, EMR, ECS/EC2, Lambda, Glue, Athena, Kinesis/Spark Streaming, Step Functions, CloudWatch, DynamoDB.
  • Strong experience working with data processing and ETL systems such as Oozie, Airflow, Azkaban, Luigi, or SSIS.
  • Experience developing solutions inside a Hadoop stack using tools such as Hive, Spark, Storm, Kafka, Ambari, or Hue.
  • Ability to work with large volumes of both raw and processed data in a variety of formats, including JSON, ORC, Parquet, and CSV.
  • Ability to work in a Linux/Unix environment (predominantly via EMR and the AWS CLI / Hadoop File System).
  • Experience with DevOps solutions such as Jenkins, GitHub, Ansible, Docker, or Kubernetes.
  • Minimum undergraduate-level qualifications in a technical discipline such as Computer Science, Data Science, Analytics, Machine Learning, or Statistics. Postgraduate qualifications preferred.
  • Demonstrated experience and expertise in setting up and maintaining cloud data solutions and AWS infrastructure will be highly regarded.
  • Strong knowledge of cloud-based data security, encryption, and protection methods will also be highly regarded.