- Design, build, and maintain distributed batch and real-time data pipelines and data models.
- Facilitate real-life, actionable use cases leveraging our data with a user- and product-oriented mindset.
- Be curious and eager to work across a variety of engineering specialties (e.g., Data Science and Machine Learning, to name a few).
- Support teams without data engineers in building decentralized data solutions and product integrations, for example around DynamoDB.
- Enforce privacy and security standards by design.
- Conceptualize, design, and implement improvements to ETL processes and data through independent communication with data-savvy stakeholders.
- Work Hours:
Super Flex Time (no core time)
In principle, 10:00 a.m.-6:45 p.m. (actual working hours: 7h 45m + 1h break)
- Holidays:
Every Saturday/Sunday, national holidays (in Japan), New Year's break, and company-designated special days
- Paid Leave:
Annual leave (up to 14 days in the first year, granted proportionally according to the month of employment; can be used from the date of hire)
Personal leave (5 days each year, granted proportionally according to the month of employment)
- Salary:
Annual salary paid in 12 monthly installments
Based on skills, experience, and abilities
Reviewed once a year
Special incentive once a year (*based on company performance and individual contribution and evaluation)
- Benefits:
Late overtime allowance; work-from-anywhere allowance (JPY 100,000)
Social insurance (health insurance, employee pension, employment insurance, and workers' compensation insurance)
Language learning support
Visa sponsorship + relocation support
【必須（MUST）】
- 3+ years of experience building complex data pipelines and working with both technical and business stakeholders.
- Experience in at least one primary language (e.g., Java, Scala, Python) and SQL (any variant).
- Experience with technologies such as BigQuery, Spark, AWS Redshift, Kafka, or Kinesis streaming.
- Experience creating and maintaining ETL processes.
- Experience designing, building, and operating a data lake or data warehouse.
- Experience with DBMS and SQL tuning.
- Strong fundamentals in big data and machine learning.
【歓迎（WANT）】
- Experience with RESTful APIs, Pub/Sub systems, or database clients.
- Experience with analytics and defining metrics.
- Experience with measuring data quality.
- Experience productionizing machine learning workflows (MLOps).
- Experience with one or more machine learning frameworks, including but not limited to scikit-learn, TensorFlow, PyTorch, and H2O.
- Language ability in Japanese and English is a plus (we have a professional translator, but language skills are nice to have).
- Experience with AWS services.
- Experience with microservices.
- Knowledge of data security and privacy.