This is a unique opportunity to intern at an early-stage, Y Combinator-backed startup. The intern will work side-by-side with the CTO to design and build the first major releases of the company’s products and help set its technical direction and culture.
About the company
HiGeorge helps companies, no matter how big or small, better leverage the world’s data to create business value. We do this by providing a no-code service where businesses can access the world’s public data and visualize it. Think Tableau with all the world’s public data already attached.
Today, HiGeorge enables media companies like the Chicago Tribune to easily create best-in-class data visualizations for their readers at a fraction of the cost and time of an in-house team. We do this by leveraging our proprietary data pipeline engine and front-end libraries, which allow us to configure new auto-updating data feeds without writing new code.
HiGeorge launched a year ago and has grown 4x since December. We are backed by a mix of Silicon Valley and media institutions, including Y Combinator, Bertelsmann Digital Media Investments (BDMI), and Garage Technology Ventures. Our ambition is to make data accessible to everyone and build the next multi-billion-dollar tech company along the way.
We are seeking a motivated data science intern to join the engineering team at HiGeorge in Los Angeles, or remotely from anywhere in the world. You’ll bring your skills and expertise to design schemas, build visualizations, and develop and deploy ETL pipelines that make external data accessible to businesses.
In this role you will
As an intern at HiGeorge, you will be involved throughout the product lifecycle, from idea generation, design, and prototyping through execution and shipping. You’ll collaborate closely with product managers and other developers to deliver data-backed experiences to our users. You will take part in high-level product and technical decision-making while also diving deep into the weeds to unblock those around you and write thoughtful, maintainable code. You will create new data visualizations, improve the data pipeline, and work with other team members to review code and shape engineering practices.
- Independently design, build, and launch new ETL pipelines in production
- Collaborate on improving the company’s data pipeline engine
- Design and build data integrity and quality controls and processes
Requirements
- Enrolled in a bachelor’s program in Computer Science or Data Science, or equivalent understanding of algorithms, performance, and systems design
- Previous work with large datasets
- Experience with SQL
What We Value
- Great communication skills and a proven ability to work as part of a tight-knit team
- Ability to learn new technologies