Data is the lifeblood of Cogo Labs and informs nearly every decision made within the company. Cogo Labs' data team carries the important responsibility of maintaining, planning, and scaling these vital systems. As a curator of these systems, you will ingest and process billions of events each day and have a direct impact on Cogo's bottom line.
As our Data Engineer, you will:
- Build and maintain robust, fault-tolerant ETL pipelines for Cogo's internal and external data resources.
- Work with incubatees and teams to plan their data pipelines, optimize their SQL, and act as a sounding board for all data questions.
- Maintain, plan, and scale Cogo's MySQL and PostgreSQL relational database infrastructure.
- Respond to occasional off-hours outages.
Skills and requirements:
- Familiarity with five of the following: Spark, MySQL, PostgreSQL, EMR, S3, Redshift, Airflow, Presto, Hive, Kafka, RabbitMQ, MapReduce.
- Proficiency with Python and one of the following languages: Java, Scala, or Golang.
- Comfort and experience with the Linux command line and Git.
- Knowledge of relational database best-practices like replication, high availability, and performance tuning.
- Strong problem-solving/analytical aptitude.
- Ability to self-manage tasks and context-switch as necessary.
- The willingness and ability to learn.
Nice to haves:
- Experience or familiarity with LVM, md, ext4, or ZFS.
- Exposure to Docker and/or containers.