About the company
Our mission is to bring blockchain to a billion people. The Alchemy Platform is a world-class developer platform designed to make building on the blockchain easy. We've built leading infrastructure in the space, powering over $105 billion in transactions for tens of millions of users in 99% of countries worldwide. The Alchemy team draws from decades of deep expertise in massively scalable infrastructure, AI, and blockchain from leadership roles at leading companies and universities like Google, Microsoft, Facebook, Stanford, and MIT. Alchemy recently raised a Series C1 at a $10.2B valuation led by Lightspeed and Silver Lake. Previously, Alchemy raised from a16z, Coatue, Addition, Stanford University, Coinbase, the Chairman of Google, Charles Schwab, and the founders and executives of leading organizations. Alchemy powers the top blockchain companies globally and has been featured in TechCrunch, Forbes, Bloomberg, and elsewhere.
Job Summary
What You'll Do:
📍Design and Implementation: Architect and develop data infrastructure solutions leveraging Snowflake's capabilities to meet business needs.
📍Data Integration: Manage and optimize data pipelines, ensuring seamless integration from diverse data sources.
📍Performance Tuning: Conduct performance optimization for Snowflake environments, including storage, compute, and query tuning.
📍Security and Compliance: Implement best practices for data security, privacy, and governance in alignment with organizational policies and industry standards.
📍Implement best practices to meet SLA requirements for business continuity and disaster recovery.
📍Collaboration: Partner with data analysts, data scientists, and business stakeholders to understand requirements and deliver solutions and data insights that drive impact.
📍Build production DAG workflows for batch data processing and storage.
📍Monitoring and Maintenance: Establish monitoring systems for reliability and proactively address issues to ensure system uptime and data integrity.
📍Set up frameworks and tools that help team members create and debug pipelines on their own.
What We're Looking For:
📍BS degree in Computer Science or a similar technical discipline; MS/PhD a plus.
📍6+ years of experience in software engineering, with at least 4+ years in data engineering or data infrastructure.
📍Experience with Airflow, Temporal, or other workflow orchestration tools.
📍Experience with streaming data architectures using Kafka and Flink is a plus.
📍Experience with Snowflake, Spark, Trino, or other query engines.
📍Experience with data modeling frameworks such as dbt and SQLMesh.
📍Experience working with Apache Iceberg or other data lake table formats.
📍Familiarity with data lake ingestion patterns.