Data Engineering Podcast
68 subscribers
Checked 9d ago
Added two years ago
An Opinionated Look At End-to-end Code Only Analytical Workflows With Bruin
Content is provided by Tobias Macey. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Tobias Macey or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://tr.player.fm/legal.
Summary
The challenges of integrating all of the tools in the modern data stack have led to a new generation of tools that focus on a fully integrated workflow. At the same time, there have been many approaches to how much of the workflow is driven by code vs. not. Burak Karakan is of the opinion that a fully integrated workflow that is driven entirely by code offers a beneficial and productive means of generating useful analytical outcomes. In this episode he shares how Bruin builds on those opinions and how you can use it to build your own analytics without having to cobble together a suite of tools with conflicting abstractions.
Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Imagine catching data issues before they snowball into bigger problems. That’s what Datafold’s new Monitors do. With automatic monitoring for cross-database data diffs, schema changes, key metrics, and custom data tests, you can catch discrepancies and anomalies in real time, right at the source. Whether it’s maintaining data integrity or preventing costly mistakes, Datafold Monitors give you the visibility and control you need to keep your entire data stack running smoothly. Want to stop issues before they hit production? Learn more at dataengineeringpodcast.com/datafold today!
- Your host is Tobias Macey and today I'm interviewing Burak Karakan about the benefits of building code-only data systems
Interview
- Introduction
- How did you get involved in the area of data management?
- Can you describe what Bruin is and the story behind it?
- Who is your target audience?
- There are numerous tools that address the ETL workflow for analytical data. What are the pain points that you are focused on for your target users?
- How does a code-only approach to data pipelines help in addressing the pain points of analytical workflows?
- How might it act as a limiting factor for organizational involvement?
- Can you describe how Bruin is designed?
- How have the design and scope of Bruin evolved since you first started working on it?
- You call out the ability to mix SQL and Python for transformation pipelines. What are the components that allow for that functionality? (a minimal sketch follows these notes)
- What are some of the ways that the combination of Python and SQL improves ergonomics of transformation workflows?
- What are the key features of Bruin that help to streamline the efforts of organizations building analytical systems?
- Can you describe the workflow of someone going from source data to warehouse and dashboard using Bruin and Ingestr?
- What are the opportunities for contributions to Bruin and Ingestr to expand their capabilities?
- What are the most interesting, innovative, or unexpected ways that you have seen Bruin and Ingestr used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Bruin?
- When is Bruin the wrong choice?
- What do you have planned for the future of Bruin?
Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
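To make the "code-only" framing concrete, here is a minimal sketch of a pipeline step that mixes SQL and Python, in the spirit of the questions above. It uses DuckDB and pandas purely for illustration; the table names and the register/execute flow show the pattern and are not Bruin's actual API.

```python
# A minimal sketch of a "code-only" pipeline step mixing SQL and Python.
# DuckDB and pandas are used for illustration only; the table names are
# hypothetical and this is not Bruin's actual API.
import duckdb
import pandas as pd

con = duckdb.connect()  # in-memory stand-in for a warehouse

# "Ingestion": land some raw rows (a tool like ingestr would normally do this).
raw = pd.DataFrame({"user_id": [1, 1, 2], "amount": [10.0, 5.0, 7.5]})
con.register("raw_orders", raw)

# SQL asset: an aggregation expressed as a query.
con.execute("""
    CREATE TABLE orders_by_user AS
    SELECT user_id, SUM(amount) AS total_amount
    FROM raw_orders
    GROUP BY user_id
""")

# Python asset: logic that is awkward to express in SQL.
df = con.execute("SELECT * FROM orders_by_user").df()
df["tier"] = df["total_amount"].apply(lambda x: "high" if x >= 10 else "low")
con.register("user_tiers", df)

print(con.execute("SELECT * FROM user_tiers ORDER BY user_id").df())
```

Both steps live in one codebase with one set of abstractions, which is the core of the argument made in this episode.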
460 episodes
All episodes
Overcoming Redis Limitations: The Dragonfly DB Approach (43:58)
Summary
In this episode of the Data Engineering Podcast Roman Gershman, CTO and founder of Dragonfly DB, explores the development and impact of high-speed in-memory databases. Roman shares his experience creating a more efficient alternative to Redis, focusing on performance gains, scalability, and cost efficiency, while addressing limitations such as high throughput and low latency scenarios. He explains how Dragonfly DB solves operational complexities for users and delves into its technical aspects, including maintaining compatibility with Redis while innovating on memory efficiency. Roman discusses the importance of cost efficiency and operational simplicity in driving adoption and shares insights on the broader ecosystem of in-memory data stores, future directions like SSD tiering and vector search capabilities, and the lessons learned from building a new database engine.
Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
- Your host is Tobias Macey and today I'm interviewing Roman Gershman about building a high-speed in-memory database and the impact of the performance gains on data applications
Interview
- Introduction
- How did you get involved in the area of data management?
- Can you describe what DragonflyDB is and the story behind it?
- What is the core problem/use case that is solved by making a "faster Redis"?
- The other major player in the high performance key/value database space is Aerospike. What are the heuristics that an engineer should use to determine whether to use that vs. Dragonfly/Redis?
- Common use cases for Redis involve application caches and queueing (e.g. Celery/RQ). What are some of the other applications that you have seen Redis/Dragonfly used for, particularly in data engineering use cases?
- There is a piece of tribal wisdom that it takes 10 years for a database to iron out all of the kinks. At the same time, there have been substantial investments in commoditizing the underlying components of database engines. Can you describe how you approached the implementation of DragonflyDB to arrive at a functional and reliable implementation?
- What are the architectural elements that contribute to the performance and scalability benefits of Dragonfly?
- How have the design and goals of the system changed since you first started working on it?
- For teams who migrate from Redis to Dragonfly, beyond the cost savings what are some of the ways that it changes how they think about their overall system design?
- What are the most interesting, innovative, or unexpected ways that you have seen Dragonfly used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on DragonflyDB?
- When is DragonflyDB the wrong choice?
- What do you have planned for the future of DragonflyDB?
Contact Info: GitHub, LinkedIn
Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
Links: DragonflyDB, Redis, Elasticache, ValKey, Aerospike, Laravel, Sidekiq, Celery, Seastar Framework, Shared-Nothing Architecture, io_uring, midi-redis, Dunning-Kruger Effect, Rust
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA…
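A point worth grounding from this conversation: because Dragonfly maintains wire compatibility with Redis, existing clients work unchanged. Here is a minimal sketch using the Python redis client, assuming a Dragonfly server on the default localhost:6379; the keys and values are hypothetical.

```python
# Talking to Dragonfly with an unmodified Redis client (redis-py).
# Assumes a Dragonfly (or Redis) server on the default localhost:6379;
# the keys and values are hypothetical.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

r.set("session:42", "tobias", ex=3600)  # cache entry with a one-hour TTL
r.lpush("jobs", "episode-42")           # queue-style usage (Celery/RQ patterns)

print(r.get("session:42"))  # -> tobias
print(r.rpop("jobs"))       # -> episode-42
```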
Bringing AI Into The Inner Loop of Data Engineering With Ascend (52:47)
Summary
In this episode of the Data Engineering Podcast Sean Knapp, CEO of Ascend.io, explores the intersection of AI and data engineering. He discusses the evolution of data engineering and the role of AI in automating processes, alleviating burdens on data engineers, and enabling them to focus on complex tasks and innovation. The conversation covers the challenges and opportunities presented by AI, including the need for intelligent tooling and its potential to streamline data engineering processes. Sean and Tobias also delve into the impact of generative AI on data engineering, highlighting its ability to accelerate development, improve governance, and enhance productivity, while also noting the current limitations and future potential of AI in the field.
Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
- Your host is Tobias Macey and today I'm interviewing Sean Knapp about how Ascend is incorporating AI into their platform to help you keep up with the rapid rate of change
Interview
- Introduction
- How did you get involved in the area of data management?
- Can you describe what Ascend is and the story behind it?
- The last time we spoke was August of 2022. What are the most notable or interesting evolutions in your platform since then?
- In that same time "AI" has taken up all of the oxygen in the data ecosystem. How has that impacted the ways that you and your customers think about their priorities?
- The introduction of AI as an API has caused many organizations to try and leap-frog their data maturity journey and jump straight to building with advanced capabilities. How is that impacting the pressures and priorities felt by data teams?
- At the same time that AI-focused product goals are straining data teams' capacities, AI also has the potential to act as an accelerator to their work. What are the roadblocks/speedbumps that are in the way of that capability?
- Many data teams are incorporating AI tools into parts of their workflow, but it can be clunky and cumbersome. How are you thinking about the fundamental changes in how your platform works with AI at its center?
- Can you describe the technical architecture that you have evolved toward that allows for AI to drive the experience rather than being a bolt-on?
- What are the concrete impacts that these new capabilities have on teams who are using Ascend?
- What are the most interesting, innovative, or unexpected ways that you have seen Ascend + AI used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on incorporating AI into the core of Ascend?
- When is Ascend the wrong choice?
- What do you have planned for the future of AI in Ascend?
Contact Info: LinkedIn
Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
Links: Ascend, Cursor AI Code Editor, Devin, GitHub Copilot, OpenAI DeepResearch, S3 Tables, AWS Glue, AWS Bedrock, Snowpark, Co-Intelligence: Living and Working with AI by Ethan Mollick (affiliate link), OpenAI o3
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA…
Astronomer's Role in the Airflow Ecosystem: A Deep Dive with Pete DeJoy (51:41)
Summary
In this episode of the Data Engineering Podcast Pete DeJoy, co-founder and product lead at Astronomer, talks about building and managing Airflow pipelines on Astronomer and the upcoming improvements in Airflow 3. Pete shares his journey into data engineering, discusses Astronomer's contributions to the Airflow project, and highlights the critical role of Airflow in powering operational data products. He covers the evolution of Airflow, its position in the data ecosystem, and the challenges faced by data engineers, including infrastructure management and observability. The conversation also touches on the upcoming Airflow 3 release, which introduces data awareness, architectural improvements, and multi-language support, and Astronomer's observability suite, Astro Observe, which provides insights and proactive recommendations for Airflow users.
Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
- Your host is Tobias Macey and today I'm interviewing Pete DeJoy about building and managing Airflow pipelines on Astronomer and the upcoming improvements in Airflow 3
Interview
- Introduction
- Can you describe what Astronomer is and the story behind it?
- How would you characterize the relationship between Airflow and Astronomer?
- Astronomer just released your State of Airflow 2025 Report yesterday and it is the largest data engineering survey ever with over 5,000 respondents. Can you talk a bit about top level findings in the report?
- What about the overall growth of the Airflow project over time?
- How have the focus and features of Astronomer changed since it was last featured on the show in 2017?
- Astro Observe GA'd in early February, what does the addition of pipeline observability mean for your customers?
- What are other capabilities similar in scope to observability that Astronomer is looking at adding to the platform?
- Why is Airflow so critical in providing an elevated observability (or cataloging, or something similar) experience in a DataOps platform?
- What are the notable evolutions in the Airflow project and ecosystem in that time?
- What are the core improvements that are planned for Airflow 3.0?
- What are the most interesting, innovative, or unexpected ways that you have seen Astro used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Airflow and Astro?
- What do you have planned for the future of Astro/Astronomer/Airflow?
Contact Info: LinkedIn
Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
Links: Astronomer, Airflow, Maxime Beauchemin, MongoDB, Databricks, Confluent, Spark, Kafka, Dagster (Podcast Episode), Prefect, Airflow 3, The Rise of the Data Engineer blog post, dbt, Jupyter Notebook, Zapier, cosmos library for dbt in Airflow, Ruff, Airflow Custom Operator, Snowflake
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA…
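For readers who have not written an Airflow pipeline, this is roughly what one looks like using the TaskFlow API in Airflow 2.x; the schedule and task bodies are illustrative placeholders, not a pipeline from this episode.

```python
# A minimal Airflow DAG using the TaskFlow API (Airflow 2.x).
# The schedule and task bodies are illustrative placeholders.
from datetime import datetime

from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False)
def example_pipeline():
    @task
    def extract() -> list[int]:
        return [1, 2, 3]  # stand-in for reading a source system

    @task
    def load(rows: list[int]) -> None:
        print(f"loaded {len(rows)} rows")  # stand-in for a warehouse write

    load(extract())  # the dependency is inferred from the data flow

example_pipeline()
```

Platforms like Astronomer operate around DAGs like this one, hosting their execution and, with Astro Observe, reporting on their health.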
Accelerated Computing in Modern Data Centers With Datapelago (55:36)
Summary
In this episode of the Data Engineering Podcast Rajan Goyal, CEO and co-founder of Datapelago, talks about improving efficiencies in data processing by reimagining system architecture. Rajan explains the shift from hyperconverged to disaggregated and composable infrastructure, highlighting the importance of accelerated computing in modern data centers. He discusses the evolution from proprietary to open, composable stacks, emphasizing the role of open table formats and the need for a universal data processing engine, and outlines Datapelago's strategy to leverage existing frameworks like Spark and Trino while providing accelerated computing benefits.
Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
- Your host is Tobias Macey and today I'm interviewing Rajan Goyal about how to drastically improve efficiencies in data processing by re-imagining the system architecture
Interview
- Introduction
- How did you get involved in the area of data management?
- Can you start by outlining the main factors that contribute to performance challenges in data lake environments?
- The different components of open data processing systems have evolved from different starting points with different objectives. In your experience, how has that un-planned and un-synchronized evolution of the ecosystem hindered the capabilities and adoption of open technologies?
- The introduction of a new cross-cutting capability (e.g. Iceberg) has typically taken a substantial amount of time to gain support across different engines and ecosystems. What do you see as the point of highest leverage to improve the capabilities of the entire stack with the least amount of co-ordination?
- What was the motivating insight that led you to invest in the technology that powers Datapelago?
- Can you describe the system design of Datapelago and how it integrates with existing data engines?
- The growth in the generation and application of unstructured data is a notable shift in the work being done by data teams. What are the areas of overlap in the fundamental nature of data (whether structured, semi-structured, or unstructured) that you are able to exploit to bridge the processing gap?
- What are the most interesting, innovative, or unexpected ways that you have seen Datapelago used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Datapelago?
- When is Datapelago the wrong choice?
- What do you have planned for the future of Datapelago?
Contact Info: LinkedIn
Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links: Datapelago, MIPS Architecture, ARM Architecture, AWS Nitro, Mellanox, Nvidia, Von Neumann Architecture, TPU == Tensor Processing Unit, FPGA == Field-Programmable Gate Array, Spark, Trino, Iceberg (Podcast Episode), Delta Lake (Podcast Episode), Hudi (Podcast Episode), Apache Gluten, Intermediate Representation, Turing Completeness, LLVM, Amdahl's Law, LSTM == Long Short-Term Memory
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA…
The Future of Data Engineering: AI, LLMs, and Automation (59:39)
Summary
In this episode of the Data Engineering Podcast Gleb Mezhanskiy, CEO and co-founder of DataFold, talks about the intersection of AI and data engineering. He discusses the challenges and opportunities of integrating AI into data engineering, particularly using large language models (LLMs) to enhance productivity and reduce manual toil. The conversation covers the potential of AI to transform data engineering tasks, such as text-to-SQL interfaces and creating semantic graphs to improve data accessibility, and explores practical applications of LLMs in automating code reviews, testing, and understanding data lineage.
Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
- Your host is Tobias Macey and today I'm interviewing Gleb Mezhanskiy about the intersection of AI and data engineering
Interview
- Introduction
- How did you get involved in the area of data management?
- modern data stack is dead
- where is AI in the data stack?
- "buy our tool to ship AI"
- opportunities for LLM in DE workflow
Contact Info: LinkedIn
Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
Links: Datafold, Copilot, Cursor IDE, AI Agents, DataChat, AI Engineering Podcast Episode, Metrics Layer, Emacs, LangChain, LangGraph, CrewAI
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA…
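One of the LLM applications named in this summary is a text-to-SQL interface. A minimal sketch with the OpenAI Python client follows; the schema string, model name, and prompts are all assumptions for illustration, not Datafold's implementation.

```python
# A minimal text-to-SQL sketch (OpenAI Python client, v1+ API).
# The schema, model name, and prompts are illustrative assumptions;
# requires OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
schema = "orders(order_id INT, user_id INT, amount NUMERIC, created_at DATE)"

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": f"Translate questions into SQL for this schema: {schema}. "
                       "Reply with SQL only.",
        },
        {"role": "user", "content": "Total revenue per user in January 2025?"},
    ],
)
print(resp.choices[0].message.content)  # candidate SQL; validate before running
```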
Evolving Responsibilities in AI Data Management (38:57)
Summary
In this episode of the Data Engineering Podcast Bartosz Mikulski talks about preparing data for AI applications. Bartosz shares his journey from data engineering to MLOps and emphasizes the importance of data testing over software development in AI contexts. He discusses the types of data assets required for AI applications, including extensive test datasets, especially in generative AI, and explains the differences in data requirements for various AI application styles. The conversation also explores the skills data engineers need to transition into AI, such as familiarity with vector databases and new data modeling strategies, and highlights the challenges of evolving AI applications, including frequent reprocessing of data when changing chunking strategies or embedding models.
Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
- Your host is Tobias Macey and today I'm interviewing Bartosz Mikulski about how to prepare data for use in AI applications
Interview
- Introduction
- How did you get involved in the area of data management?
- Can you start by outlining some of the main categories of data assets that are needed for AI applications?
- How does the nature of the application change those requirements? (e.g. RAG app vs. agent, etc.)
- How do the different assets map to the stages of the application lifecycle?
- What are some of the common roles and divisions of responsibility that you see in the construction and operation of a "typical" AI application?
- For data engineers who are used to data warehousing/BI, what are the skills that map to AI apps?
- What are some of the data modeling patterns that are needed to support AI apps?
- chunking strategies
- metadata management
- What are the new categories of data that data engineers need to manage in the context of AI applications?
- agent memory generation/evolution
- conversation history management
- data collection for fine tuning
- What are some of the notable evolutions in the space of AI applications and their patterns that have happened in the past ~1-2 years that relate to the responsibilities of data engineers?
- What are some of the skills gaps that teams should be aware of and identify training opportunities for?
- What are the most interesting, innovative, or unexpected ways that you have seen data teams address the needs of AI applications?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on AI applications and their reliance on data?
- What are some of the emerging trends that you are paying particular attention to?
Contact Info: Website, LinkedIn
Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
Links: Spark, Ray, Chunking Strategies, Hypothetical document embeddings, Model Fine Tuning, Prompt Compression
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA…
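Chunking strategies come up twice in these notes, so a concrete example helps: the simplest approach is fixed-size chunks with overlap. The sizes below are arbitrary assumptions; production systems often split on token counts or semantic boundaries instead, and changing strategy later typically means reprocessing the corpus, which is one of the challenges Bartosz describes.

```python
# A minimal fixed-size chunker with overlap, in plain Python.
# Character-based for simplicity; the sizes are arbitrary assumptions.
def chunk_text(text: str, size: int = 200, overlap: int = 40) -> list[str]:
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

doc = "Data engineering for AI applications. " * 100
chunks = chunk_text(doc)
print(len(chunks), len(chunks[0]))  # number of chunks, size of the first
```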
CSVs Will Never Die And OneSchema Is Counting On It (54:40)
Summary
In this episode of the Data Engineering Podcast Andrew Luo, CEO of OneSchema, talks about handling CSV data in business operations. Andrew shares his background in data engineering and CRM migration, which led to the creation of OneSchema, a platform designed to automate CSV imports and improve data validation processes. He discusses the challenges of working with CSVs, including inconsistent type representation, lack of schema information, and technical complexities, and explains how OneSchema addresses these issues using multiple CSV parsers and AI for data type inference and validation. Andrew highlights the business case for OneSchema, emphasizing efficiency gains for companies dealing with large volumes of CSV data, and shares plans to expand support for other data formats and integrate AI-driven transformation packs for specific industries.
Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
- Your host is Tobias Macey and today I'm interviewing Andrew Luo about how OneSchema addresses the headaches of dealing with CSV data for your business
Interview
- Introduction
- How did you get involved in the area of data management?
- Despite the years of evolution and improvement in data storage and interchange formats, CSVs are just as prevalent as ever. What are your opinions/theories on why they are so ubiquitous?
- What are some of the major sources of CSV data for teams that rely on them for business and analytical processes?
- The most obvious challenge with CSVs is their lack of type information, but they are notorious for having numerous other problems. What are some of the other major challenges involved with using CSVs for data interchange/ingestion?
- Can you describe what you are building at OneSchema and the story behind it?
- What are the core problems that you are solving, and for whom?
- Can you describe how you have architected your platform to be able to manage the variety, volume, and multi-tenancy of data that you process?
- How have the design and goals of the product changed since you first started working on it?
- What are some of the major performance issues that you have encountered while dealing with CSV data at scale?
- What are some of the most surprising things that you have learned about CSVs in the process of building OneSchema?
- What are the most interesting, innovative, or unexpected ways that you have seen OneSchema used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on OneSchema?
- When is OneSchema the wrong choice?
- What do you have planned for the future of OneSchema?
Contact Info: LinkedIn
Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
Links: OneSchema, EDI == Electronic Data Interchange, UTF-8 BOM (Byte Order Mark) Characters, SOAP, CSV RFC, Iceberg, SSIS == SQL Server Integration Services, MS Access, Datafusion, JSON Schema, SFTP == Secure File Transfer Protocol
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA…
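To make the type-inference problem concrete: a CSV carries no schema, so an importer has to guess each column's type from its values. A toy version of that inference in plain Python follows; the int -> float -> string precedence is a simplified assumption, and real importers also contend with dates, locales, BOMs, and dialect quirks, as this episode discusses.

```python
# A toy per-column type inference pass over CSV data.
# The int -> float -> str precedence is a simplified assumption.
import csv
import io

def infer_type(values: list[str]) -> str:
    for caster, name in ((int, "int"), (float, "float")):
        try:
            for v in values:
                caster(v)
            return name
        except ValueError:
            continue
    return "str"

sample = "user_id,amount,note\n1,10.5,ok\n2,3,late payment\n"
rows = list(csv.DictReader(io.StringIO(sample)))
for col in rows[0]:
    print(col, infer_type([r[col] for r in rows]))
# -> user_id int / amount float / note str
```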
Breaking Down Data Silos: AI and ML in Master Data Management (57:30)
Summary
In this episode of the Data Engineering Podcast Dan Bruckner, co-founder and CTO of Tamr, talks about the application of machine learning (ML) and artificial intelligence (AI) in master data management (MDM). Dan shares his journey from working at CERN to becoming a data expert and discusses the challenges of reconciling large-scale organizational data. He explains how data silos arise from independent teams and highlights the importance of combining traditional techniques with modern AI to address the nuances of data reconciliation. Dan emphasizes the transformative potential of large language models (LLMs) in creating more natural user experiences, improving trust in AI-driven data solutions, and simplifying complex data management processes. He also discusses the balance between using AI for complex data problems and the necessity of human oversight to ensure accuracy and trust.
Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
- As a listener of the Data Engineering Podcast you clearly care about data and how it affects your organization and the world. For even more perspective on the ways that data impacts everything around us don't miss Data Citizens® Dialogues, the forward-thinking podcast brought to you by Collibra. You'll get further insights from industry leaders, innovators, and executives in the world's largest companies on the topics that are top of mind for everyone. In every episode of Data Citizens® Dialogues, industry leaders unpack data's impact on the world; like in their episode "The Secret Sauce Behind McDonald's Data Strategy", which digs into how AI-driven tools can be used to support crew efficiency and customer interactions. In particular I appreciate the ability to hear about the challenges that enterprise scale businesses are tackling in this fast-moving field. The Data Citizens Dialogues podcast is bringing the data conversation to you, so start listening now! Follow Data Citizens Dialogues on Apple, Spotify, YouTube, or wherever you get your podcasts.
- Your host is Tobias Macey and today I'm interviewing Dan Bruckner about the application of ML and AI techniques to the challenge of reconciling data at the scale of business
Interview
- Introduction
- How did you get involved in the area of data management?
- Can you start by giving an overview of the different ways that organizational data becomes unwieldy and needs to be consolidated and reconciled?
- How does that reconciliation relate to the practice of "master data management"?
- What are the scaling challenges with the current set of practices for reconciling data?
- ML has been applied to data cleaning for a long time in the form of entity resolution, etc. How has the landscape evolved or matured in recent years?
- What (if any) transformative capabilities do LLMs introduce?
- What are the missing pieces/improvements that are necessary to make current AI systems usable out-of-the-box for data cleaning?
- What are the strategic decisions that need to be addressed when implementing ML/AI techniques in the data cleaning/reconciliation process?
- What are the risks involved in bringing ML to bear on data cleaning for inexperienced teams?
- What are the most interesting, innovative, or unexpected ways that you have seen ML techniques used in data resolution?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on using ML/AI in master data management?
- When is ML/AI the wrong choice for data cleaning/reconciliation?
- What are your hopes/predictions for the future of ML/AI applications in MDM and data cleaning?
Contact Info: LinkedIn
Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
Links: Tamr, Master Data Management, CERN, LHC, Michael Stonebraker, Conway's Law, Expert Systems, Information Retrieval, Active Learning
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA…
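As a toy illustration of the entity-resolution techniques this episode builds on, the sketch below scores candidate record pairs with plain string similarity. The records, normalization, and 0.7 threshold are arbitrary assumptions; systems like Tamr layer learned models, blocking, and human review on top of this basic idea.

```python
# A toy entity-resolution pass: score record pairs by name similarity.
# The records, normalization, and 0.7 threshold are arbitrary assumptions.
from difflib import SequenceMatcher
from itertools import combinations

records = [
    {"id": 1, "name": "Acme Corp."},
    {"id": 2, "name": "ACME Corporation"},
    {"id": 3, "name": "Globex LLC"},
]

def normalize(name: str) -> str:
    return name.lower().replace(".", "").replace(",", "")

for a, b in combinations(records, 2):
    score = SequenceMatcher(None, normalize(a["name"]), normalize(b["name"])).ratio()
    if score > 0.7:
        print(f"likely duplicate: {a['id']} <-> {b['id']} (score={score:.2f})")
```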
Building a Data Vision Board: A Guide to Strategic Planning (49:59)
Summary
In this episode of the Data Engineering Podcast Lior Barak shares his insights on developing a three-year strategic vision for data management. He discusses the importance of having a strategic plan for data, highlighting the need for data teams to focus on impact rather than just enablement. He introduces the concept of a "data vision board" and explains how it can help organizations outline their strategic vision by considering three key forces: regulation, stakeholders, and organizational goals. Lior emphasizes the importance of balancing short-term pressures with long-term strategic goals, quantifying the cost of data issues to prioritize effectively, and maintaining the strategic vision as a living document through regular reviews. He encourages data teams to shift from being enablers to impact creators and provides practical advice on implementing a data vision board, setting clear KPIs, and embracing a product mindset to create tangible business impacts through strategic data management.
Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- It’s 2024, why are we still doing data migrations by hand? Teams spend months—sometimes years—manually converting queries and validating data, burning resources and crushing morale. Datafold's AI-powered Migration Agent brings migrations into the modern era. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today to learn how Datafold can automate your migration and ensure source to target parity.
- Your host is Tobias Macey and today I'm interviewing Lior Barak about how to develop your three year strategic vision for data
Interview
- Introduction
- How did you get involved in the area of data management?
- Can you start by giving an outline of the types of problems that occur as a result of not developing a strategic plan for an organization's data systems?
- What is the format that you recommend for capturing that strategic vision?
- What are the types of decisions and details that you believe should be included in a vision statement?
- Why is a 3 year horizon beneficial? What does that scale of time encourage/discourage in the debate and decision-making process?
- Who are the personas that should be included in the process of developing this strategy document?
- Can you walk us through the steps and processes involved in developing the data vision board for an organization?
- What are the time-frames or milestones that should lead to revisiting and revising the strategic objectives?
- What are the most interesting, innovative, or unexpected ways that you have seen a data vision strategy used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on data strategy development?
- When is a data vision board the wrong choice?
- What are some additional resources or practices that you recommend teams invest in as a supplement to this strategic vision exercise?
Contact Info: LinkedIn, Substack
Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
Links: Vision Board Overview, Episode 397: Defining A Strategy For Your Data Products, Minto Pyramid Principle, KPI == Key Performance Indicator, OKR == Objectives and Key Results, Phil Jackson: Eleven Rings (affiliate link)
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA…
How Orchestration Impacts Data Platform Architecture (59:39)
Summary
The core task of data engineering is managing the flows of data through an organization. Ensuring that those flows execute on schedule and without error is the role of the data orchestrator. Which orchestration engine you choose impacts the ways that you architect the rest of your data platform. In this episode Hugo Lu shares his thoughts as the founder of an orchestration company on how to think about data orchestration and data platform design as we navigate the current era of data engineering.
Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- It’s 2024, why are we still doing data migrations by hand? Teams spend months—sometimes years—manually converting queries and validating data, burning resources and crushing morale. Datafold's AI-powered Migration Agent brings migrations into the modern era. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today to learn how Datafold can automate your migration and ensure source to target parity.
- As a listener of the Data Engineering Podcast you clearly care about data and how it affects your organization and the world. For even more perspective on the ways that data impacts everything around us don't miss Data Citizens® Dialogues, the forward-thinking podcast brought to you by Collibra. You'll get further insights from industry leaders, innovators, and executives in the world's largest companies on the topics that are top of mind for everyone. In every episode of Data Citizens® Dialogues, industry leaders unpack data's impact on the world, from big picture questions like AI governance and data sharing to more nuanced questions like, how do we balance offense and defense in data management? In particular I appreciate the ability to hear about the challenges that enterprise scale businesses are tackling in this fast-moving field. The Data Citizens Dialogues podcast is bringing the data conversation to you, so start listening now! Follow Data Citizens Dialogues on Apple, Spotify, YouTube, or wherever you get your podcasts.
- Your host is Tobias Macey and today I'm interviewing Hugo Lu about the data platform and orchestration ecosystem and how to navigate the available options
Interview
- Introduction
- How did you get involved in building data platforms?
- Can you describe what an orchestrator is in the context of data platforms?
- There are many other contexts in which orchestration is necessary. What are some examples of how orchestrators have adapted (or failed to adapt) to the times?
- What are the core features that are necessary for an orchestrator to have when dealing with data-oriented workflows?
- Beyond the bare necessities, what are some of the other features and design considerations that go into building a first-class data platform or orchestration system?
- There have been several generations of orchestration engines over the past several years. How would you characterize the different coarse groupings of orchestration engines across those generational boundaries?
- How do the characteristics of a data orchestrator influence the overarching architecture of an organization's data platform/data operations? What about the reverse?
- How have the cycles of ML and AI workflow requirements impacted the design requirements for data orchestrators?
- What are the most interesting, innovative, or unexpected ways that you have seen data orchestrators used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on data orchestration?
- When is an orchestrator the wrong choice?
- What are your predictions and/or hopes for the future of data orchestration?
Contact Info: Medium, LinkedIn
Parting Question
- From your perspective, what is the biggest thing data teams are missing in the technology today?
Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
Links: Orchestra, Previous Episode: Overview Of The State Of Data Orchestration, Cron, ArgoCD, DAG, Kubernetes, Data Mesh, Airflow, SSIS == SQL Server Integration Services, Pentaho Kettle, DataVolo, NiFi (Podcast Episode), Dagster, gRPC, Coalesce (Podcast Episode), dbt, DataHub, Palantir
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA…
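To ground the opening question ("what is an orchestrator?"): at its core, a data orchestrator runs tasks in dependency order. Below is a minimal sketch of that kernel in plain Python with a hypothetical three-task graph; everything real orchestrators add (scheduling, retries, state, observability) sits on top of this.

```python
# The kernel of a data orchestrator: run tasks in dependency (DAG) order.
# The three-task graph is hypothetical; scheduling, retries, and state
# tracking are what real orchestrators layer on top.
from graphlib import TopologicalSorter

def extract():   print("extract raw data")
def transform(): print("transform into models")
def publish():   print("publish to the dashboard")

# Map each task to the set of tasks it depends on.
dag = {transform: {extract}, publish: {transform}}

for step in TopologicalSorter(dag).static_order():
    step()  # runs extract -> transform -> publish
```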
An Exploration Of The Impediments To Reusable Data Pipelines (51:32)
Summary

In this episode of the Data Engineering Podcast the inimitable Max Beauchemin talks about reusability in data pipelines. The conversation explores the "write everything twice" problem, where similar pipelines are built without code reuse, and discusses the challenges of managing different SQL dialects and relational databases. Max also touches on the evolving role of data engineers, drawing parallels with front-end engineering, and suggests that generative AI could facilitate knowledge capture and distribution in data engineering. He encourages the community to share reference implementations and templates to foster collaboration and innovation, and expresses hopes for a future where code reuse becomes more prevalent.

Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
- Your host is Tobias Macey and today I'm joined again by Max Beauchemin to talk about the challenges of reusability in data pipelines

Interview
- Introduction
- How did you get involved in the area of data management?
- Can you start by sharing your current thesis on the opportunities and shortcomings of code and component reusability in the data context?
- What are some ways that you think about what constitutes a "component" in this context?
- The data ecosystem has arguably grown more varied and nuanced in recent years. At the same time, the number and maturity of tools has grown. What is your view on the current trend in productivity for data teams and practitioners?
- What do you see as the core impediments to building more reusable and general-purpose solutions in data engineering?
- How can we balance the actual needs of data consumers against their requests (whether well- or un-informed) to help increase our ability to better design our workflows for reuse?
- In data engineering there are two broad approaches: code-focused or SQL-focused pipelines. In principle one would think that code-focused environments would have better composability. What are you seeing as the realities in your personal experience and what you hear from other teams?
- When it comes to SQL dialects, dbt offers the option of Jinja macros, whereas SDF and SQLMesh offer automatic translation. There are also tools like PRQL and Malloy that aim to abstract away the underlying SQL. What are the tradeoffs across those options that help or hinder the portability of transformation logic? (A SQLGlot sketch of the automatic-translation approach follows these notes.)
- Which layers of the data stack/steps in the data journey do you see the greatest opportunity for improving the creation of more broadly usable abstractions/reusable elements?
  - low/no code systems for code reuse
  - impact of LLMs on reusability/composition
  - impact of background on industry practices (e.g. DBAs, sysadmins, analysts vs. SWE, etc.)
  - polymorphic data models (e.g. activity schema)
- What are the most interesting, innovative, or unexpected ways that you have seen teams address composability and reusability of data components?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on data-oriented tools and utilities?
- What are your hopes and predictions for sharing of code and logic in the future of data engineering?

Contact Info
- LinkedIn

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.

Links
- Max's Blog Post
- Airflow
- Superset
- Tableau
- Looker
- PowerBI
- Cohort Analysis
- NextJS
- Airbyte (Podcast Episode)
- Fivetran (Podcast Episode)
- Segment
- dbt
- SQLMesh (Podcast Episode)
- Spark
- LAMP Stack
- PHP
- Relational Algebra
- Knowledge Graph
- Python
- Marshmallow
- Data Warehouse Lifecycle Toolkit (affiliate link)
- Entity Centric Data Modeling Blog Post
- Amplitude
- OSACon presentation
- ol-data-platform: Tobias' team's data platform code

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
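To make the "automatic translation" option above concrete, here is a minimal sketch using the open-source SQLGlot library (the foundation of SQLMesh's dialect handling) to transpile a single query across warehouse dialects. The query and the "orders" table are illustrative, not from the episode.

```python
# A minimal sketch, assuming a hypothetical "orders" table: one query written
# against DuckDB is transpiled to other warehouse dialects with SQLGlot.
import sqlglot

query = "SELECT date_trunc('month', order_date) AS month, COUNT(*) AS orders FROM orders GROUP BY 1"

for dialect in ["snowflake", "bigquery", "spark"]:
    # transpile() parses with the `read` dialect and re-renders SQL in `write`.
    print(f"{dialect}: {sqlglot.transpile(query, read='duckdb', write=dialect)[0]}")
```

Jinja macros attack the same portability problem by parameterizing the SQL text itself, which keeps the logic in one place per target but adds a templating layer that has to be maintained alongside the SQL.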
Data Engineering Podcast

1 The Art of Database Selection and Evolution 59:56
Summary

In this episode of the Data Engineering Podcast Sam Kleinman talks about the pivotal role of databases in software engineering. Sam shares his journey into the world of data and discusses the complexities of database selection, highlighting the trade-offs between different database architectures and how these choices affect system design, query performance, and the need for ETL processes. He emphasizes the importance of understanding specific requirements to choose the right database engine and warns against over-engineering solutions that can lead to increased complexity. Sam also touches on the tendency of engineers to move logic to the application layer due to skepticism about database longevity and advises teams to leverage database capabilities instead. Finally, he identifies a significant gap in data management tooling: the lack of easy-to-use testing tools for database interactions, highlighting the need for better testing paradigms to ensure reliability and reduce bugs in data-driven applications.

Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- It's 2024, why are we still doing data migrations by hand? Teams spend months—sometimes years—manually converting queries and validating data, burning resources and crushing morale. Datafold's AI-powered Migration Agent brings migrations into the modern era. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today to learn how Datafold can automate your migration and ensure source to target parity.
- Your host is Tobias Macey and today I'm interviewing Sam Kleinman about database tradeoffs across operating environments and axes of scale

Interview
- Introduction
- How did you get involved in the area of data management?
- The database engine you use has a substantial impact on how you architect your overall system. When starting a greenfield project, what do you see as the most important factor to consider when selecting a database?
- points of friction introduced by database capabilities
- embedded databases (e.g. SQLite, DuckDB, LanceDB), when to use and when do they become a bottleneck (see the DuckDB sketch after these notes)
- single-node database engines (e.g. Postgres, MySQL), when are they legitimately a problem
- distributed databases (e.g. CockroachDB, PlanetScale, MongoDB)
- polyglot storage vs. general-purpose/multimodal databases
- federated queries, benefits and limitations
- ease of integration vs. variability of performance and access control

Contact Info
- LinkedIn
- GitHub

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.

Links
- MongoDB
- Neon (Podcast Episode)
- GlareDB
- NoSQL
- S3 Conditional Write
- Event driven architecture
- CockroachDB
- Couchbase
- Cassandra

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
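As a small illustration of the embedded-database tradeoff discussed in the interview, the sketch below uses DuckDB, which runs entirely in-process: there is no server to deploy for single-node analytics over local files. The CSV file name is a hypothetical placeholder.

```python
# A minimal sketch, assuming a hypothetical local file "orders.csv": DuckDB runs
# in-process, so analytical queries over local files need no database server.
import duckdb

con = duckdb.connect()  # in-memory database; pass a file path to persist instead

rows = con.execute(
    "SELECT status, COUNT(*) AS n FROM read_csv_auto('orders.csv') GROUP BY status ORDER BY n DESC"
).fetchall()
print(rows)
```

The flip side, as the interview outline notes, is that once the working set or write concurrency outgrows one machine, the single-node and distributed engines become the better fit.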
Data Engineering Podcast

1 Bridging Code and UI in Data Orchestration with Kestra 44:30
Summary

In this episode of the Data Engineering Podcast, Anna Geller talks about the integration of code and UI-driven interfaces for data orchestration. Anna defines data orchestration as automating the coordination of workflow nodes that interact with data across various business functions, discussing how it goes beyond ETL and analytics to enable real-time data processing across different internal systems. She explores the challenges of using existing scheduling tools for data-specific workflows, highlighting limitations and anti-patterns, and discusses Kestra's solution, a low-code orchestration platform that combines code-driven flexibility with UI-driven simplicity. Anna delves into Kestra's architectural design, API-first approach, and pluggable infrastructure, and shares insights on balancing UI and code-driven workflows, the challenges of open-core business models, and innovative user applications of Kestra's platform.

Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
- As a listener of the Data Engineering Podcast you clearly care about data and how it affects your organization and the world. For even more perspective on the ways that data impacts everything around us you should listen to Data Citizens® Dialogues, the forward-thinking podcast from the folks at Collibra. You'll get further insights from industry leaders, innovators, and executives in the world's largest companies on the topics that are top of mind for everyone. They address questions around AI governance, data sharing, and working at global scale. In particular I appreciate the ability to hear about the challenges that enterprise scale businesses are tackling in this fast-moving field. While data is shaping our world, Data Citizens Dialogues is shaping the conversation. Subscribe to Data Citizens Dialogues on Apple, Spotify, Youtube, or wherever you get your podcasts.
- Your host is Tobias Macey and today I'm interviewing Anna Geller about incorporating both code and UI driven interfaces for data orchestration

Interview
- Introduction
- How did you get involved in the area of data management?
- Can you start by sharing a definition of what constitutes "data orchestration"?
- There are many orchestration and scheduling systems that exist in other contexts (e.g. CI/CD systems, Kubernetes, etc.). Those are often adapted to data workflows because they already exist in the organizational context. What are the anti-patterns and limitations that approach introduces in data workflows?
- What are the problems that exist in the opposite direction of using data orchestrators for CI/CD, etc.?
- Data orchestrators have been around for decades, with many different generations and opinions about how and by whom they are used. What do you see as the main motivation for UI vs. code-driven workflows?
- What are the benefits of combining code-driven and UI-driven capabilities in a single orchestrator?
- What constraints does it necessitate to allow for interoperability between those modalities?
- Data orchestrators need to integrate with many external systems. How does Kestra approach building integrations and ensure governance for all their underlying configurations?
- Managing workflows at scale across teams can be challenging in terms of providing structure and visibility of dependencies across workflows and teams. What features does Kestra offer so that all pipelines and teams stay organised?
- What are the most interesting, innovative, or unexpected ways that you have seen Kestra used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Kestra?
- When is Kestra the wrong choice?
- What do you have planned for the future of Kestra?

Contact Info
- LinkedIn
- Blog

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.

Links
- Kestra
- CI/CD
- State Machine
- AWS Lambda
- GitHub Actions
- ECS Fargate
- Airflow
- Kafka
- Elasticsearch
- Airflow XCom

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

In this episode of the Data Engineering Podcast, host Tobias Macey interviews Anna Geller, a data engineer turned product manager, about the integration of code and UI-driven interfaces for data orchestration. Anna shares her journey from working with data during an internship at KPMG to her current role as a product lead at Kestra. She provides her insights into the concept of data orchestration, emphasizing its broader scope beyond just ETL and analytics, and discusses the challenges and anti-patterns that arise when using existing scheduling systems for data-specific workflows.

Anna explains the overlap between CI/CD, scheduling, and orchestration tools, and the limitations that occur when these tools are used for data workflows. She highlights the importance of visibility and governance at scale and the need for a dedicated orchestrator like Kestra. The conversation also delves into the challenges of using data orchestrators for non-data workflows and the benefits of combining code and UI-driven approaches.

Anna discusses Kestra's architecture, which supports both JDBC and Kafka backends, and its focus on API-first interactions (a hedged sketch of this interaction style follows these notes). She explains how Kestra handles task granularity, inputs, and outputs, and the flexibility provided by its plugin system. The episode also explores Kestra's approach to data as assets, the target audience for Kestra, and how it bridges different workflows across organizational boundaries. The discussion touches on Kestra's open-core model, the challenges of balancing open-source and enterprise features, and the innovative ways Kestra is being applied. Anna shares insights into Kestra's local development experience, the lessons learned in building the product, and the upcoming features and projects that Kestra is excited to explore.
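As a loose illustration of the API-first approach Anna describes, the following sketch triggers a flow execution over HTTP from Python. The endpoint path, input-passing convention, and response shape are assumptions for illustration only, not verified Kestra API details; consult the Kestra documentation for the actual interface.

```python
# A hedged sketch of API-first orchestration. The URL, endpoint path, multipart
# input convention, and response shape below are assumptions for illustration,
# not verified Kestra API details.
import requests

KESTRA_URL = "http://localhost:8080"  # hypothetical local instance

resp = requests.post(
    f"{KESTRA_URL}/api/v1/executions/company.team/hello_world",  # namespace/flow id (assumed)
    files={"greeting": (None, "hello from the API")},  # flow inputs as form fields (assumed)
)
resp.raise_for_status()
print(resp.json().get("id"))  # execution id, if the response matches this assumed shape
```

The point of the sketch is the design choice, not the exact paths: when every action the UI performs is also an API call, code-driven and UI-driven workflows stay interoperable by construction.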
Data Engineering Podcast

1 Streaming Data Into The Lakehouse With Iceberg And Trino At Going 39:49
In this episode, I had the pleasure of speaking with Ken Pickering, VP of Engineering at Going, about the intricacies of streaming data into a Trino and Iceberg lakehouse. Ken shared his journey from product engineering to becoming deeply involved in data-centric roles, highlighting his experiences in ecommerce and InsurTech. At Going, Ken leads the data platform team, focusing on finding travel deals for consumers, a task that involves handling massive volumes of flight data and event stream information.

Ken explained the dual approach of passive and active search strategies used by Going to manage the vast data landscape. Passive search involves aggregating data from global distribution systems, while active search is more transactional, querying specific flight prices. This approach helps Going sift through approximately 50 petabytes of data annually to identify the best travel deals.

We delved into the technical architecture supporting these operations, including the use of Confluent for data streaming, Starburst Galaxy for transformation, and Databricks for modeling. Ken emphasized the importance of an open lakehouse architecture, which allows for flexibility and scalability as the business grows.

Ken also discussed the composition of Going's engineering and data teams, highlighting the collaborative nature of their work and the reliance on vendor tooling to streamline operations. He shared insights into the challenges and strategies of managing data life cycles, ensuring data quality, and maintaining uptime for consumer-facing applications.

Throughout our conversation, Ken provided a glimpse into the future of Going's data architecture, including potential expansions into other travel modes and the integration of large language models for enhanced customer interaction. This episode offers a comprehensive look at the complexities and innovations in building a data-driven travel advisory service.
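For readers unfamiliar with the stack described here, the sketch below shows how a Python client might query an Iceberg table through Trino, in the spirit of Going's lakehouse setup. The host, catalog, schema, and table names are hypothetical placeholders, not details from the episode.

```python
# A minimal sketch, assuming hypothetical connection details and a hypothetical
# iceberg.flights.fares table exposed through a Trino catalog.
import trino  # pip install trino

conn = trino.dbapi.connect(
    host="trino.example.internal",  # placeholder hostname
    port=8080,
    user="analytics",
    catalog="iceberg",  # Trino catalog backed by Iceberg tables
    schema="flights",
)
cur = conn.cursor()
cur.execute("SELECT origin, MIN(price) AS best_fare FROM fares GROUP BY origin LIMIT 10")
for row in cur.fetchall():
    print(row)
```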
Data Engineering Podcast

1 An Opinionated Look At End-to-end Code Only Analytical Workflows With Bruin 56:11
Summary

The challenges of integrating all of the tools in the modern data stack have led to a new generation of tools that focus on a fully integrated workflow. At the same time, there have been many approaches to how much of the workflow is driven by code vs. not. Burak Karakan is of the opinion that a fully integrated workflow that is driven entirely by code offers a beneficial and productive means of generating useful analytical outcomes. In this episode he shares how Bruin builds on those opinions and how you can use it to build your own analytics without having to cobble together a suite of tools with conflicting abstractions.

Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Imagine catching data issues before they snowball into bigger problems. That's what Datafold's new Monitors do. With automatic monitoring for cross-database data diffs, schema changes, key metrics, and custom data tests, you can catch discrepancies and anomalies in real time, right at the source. Whether it's maintaining data integrity or preventing costly mistakes, Datafold Monitors give you the visibility and control you need to keep your entire data stack running smoothly. Want to stop issues before they hit production? Learn more at dataengineeringpodcast.com/datafold today!
- Your host is Tobias Macey and today I'm interviewing Burak Karakan about the benefits of building code-only data systems

Interview
- Introduction
- How did you get involved in the area of data management?
- Can you describe what Bruin is and the story behind it?
- Who is your target audience?
- There are numerous tools that address the ETL workflow for analytical data. What are the pain points that you are focused on for your target users?
- How does a code-only approach to data pipelines help in addressing the pain points of analytical workflows?
- How might it act as a limiting factor for organizational involvement?
- Can you describe how Bruin is designed?
- How have the design and scope of Bruin evolved since you first started working on it?
- You call out the ability to mix SQL and Python for transformation pipelines. What are the components that allow for that functionality? (A generic sketch of the pattern follows these notes.)
- What are some of the ways that the combination of Python and SQL improves ergonomics of transformation workflows?
- What are the key features of Bruin that help to streamline the efforts of organizations building analytical systems?
- Can you describe the workflow of someone going from source data to warehouse and dashboard using Bruin and Ingestr?
- What are the opportunities for contributions to Bruin and Ingestr to expand their capabilities?
- What are the most interesting, innovative, or unexpected ways that you have seen Bruin and Ingestr used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Bruin?
- When is Bruin the wrong choice?
- What do you have planned for the future of Bruin?

Contact Info
- LinkedIn

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.

Links
- Bruin
- Fivetran
- Stitch
- Ingestr
- Bruin CLI
- Meltano
- SQLGlot
- dbt
- SQLMesh (Podcast Episode)
- SDF (Podcast Episode)
- Airflow
- Dagster
- Snowpark
- Atlan
- Evidence

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
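To illustrate the SQL-plus-Python ergonomics discussed in the interview, here is a generic sketch of the pattern, deliberately not Bruin's actual asset format (see the Bruin docs for that): set-based work is pushed into SQL while procedural logic stays in Python, with DuckDB as the in-process engine. The file, table, and column names are hypothetical.

```python
# A generic sketch of mixing SQL and Python in one pipeline step (not Bruin's
# actual asset format). Assumes a hypothetical events.parquet with a `ts` column;
# the .df() call requires pandas to be installed.
import duckdb

con = duckdb.connect()
con.execute("CREATE TABLE events AS SELECT * FROM read_parquet('events.parquet')")

# SQL step: set-based aggregation stays in the engine.
daily = con.execute(
    "SELECT date_trunc('day', ts) AS day, COUNT(*) AS n FROM events GROUP BY 1 ORDER BY 1"
).df()

# Python step: procedural enrichment that is awkward to express in SQL.
daily["is_spike"] = daily["n"] > 2 * daily["n"].rolling(7, min_periods=1).mean()

# Hand the enriched frame back to SQL-land for downstream queries.
con.register("daily_flagged", daily)
con.execute("CREATE TABLE daily_metrics AS SELECT * FROM daily_flagged")
print(con.execute("SELECT COUNT(*) FROM daily_metrics WHERE is_spike").fetchone())
```

The appeal of the code-only framing is that both steps live in one versioned pipeline definition rather than being split across a SQL tool and a separate Python runner.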