Articles

Example Article: Testing the Automated Workflow

This is an example article created to test the automated Calmly Writer → GitHub Pages workflow.

What This Article Demonstrates

When this file is committed and pushed to GitHub, the automated workflow will:

  1. Validate the frontmatter - Check that the title exists
  2. Auto-generate the date - Derive a date from the file's modification time
  3. Auto-add metadata - Set draft: false and tags: []
  4. Process images - Copy any images from ./images/ to /static/images/
  5. Migrate the file - Move this from /drafts/ to /content/articles/
  6. Build and deploy - Hugo builds the site and deploys to GitHub Pages
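Steps 1-5 above can be sketched as a single processing function. This is a minimal illustration, not the actual workflow script: the paths, the YAML frontmatter layout, and the function name `process_draft` are assumptions for the example, and step 6 (the Hugo build and deploy) is left to CI.

```python
import re
import shutil
from datetime import datetime, timezone
from pathlib import Path

def process_draft(draft: Path, content_dir: Path, static_images: Path) -> Path:
    """Hypothetical sketch of workflow steps 1-5 for one draft file."""
    text = draft.read_text(encoding="utf-8")

    # 1. Validate the frontmatter: a `title:` line must exist in the YAML block.
    match = re.search(r"^---\n(.*?)\n---\n", text, re.DOTALL)
    if not match or not re.search(r"^title:", match.group(1), re.MULTILINE):
        raise ValueError(f"{draft}: missing frontmatter title")
    front = match.group(1)

    # 2. Auto-generate the date from the file's modification time.
    if not re.search(r"^date:", front, re.MULTILINE):
        mtime = datetime.fromtimestamp(draft.stat().st_mtime, tz=timezone.utc)
        front += f"\ndate: {mtime.isoformat()}"

    # 3. Auto-add metadata defaults.
    if not re.search(r"^draft:", front, re.MULTILINE):
        front += "\ndraft: false"
    if not re.search(r"^tags:", front, re.MULTILINE):
        front += "\ntags: []"
    text = f"---\n{front}\n---\n" + text[match.end():]

    # 4. Copy any images sitting next to the draft into the static tree.
    images = draft.parent / "images"
    if images.is_dir():
        static_images.mkdir(parents=True, exist_ok=True)
        for img in images.iterdir():
            if img.is_file():
                shutil.copy2(img, static_images / img.name)

    # 5. Migrate the processed file from drafts/ into the content tree.
    content_dir.mkdir(parents=True, exist_ok=True)
    target = content_dir / draft.name
    target.write_text(text, encoding="utf-8")
    draft.unlink()
    return target
```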

Testing Image Processing (Optional)

To test image processing, add an image to the article's ./images/ directory; the workflow copies it to /static/images/ during migration.

Building Resilient Data Pipelines with SQLMesh: A Modern Alternative to dbt

SQLMesh is emerging as a powerful alternative to traditional data transformation tools like dbt, offering better performance, smarter incremental processing, and more robust data pipeline management.

In this deep dive, I'll explore how SQLMesh's approach to data transformations can solve common pipeline challenges that keep data engineers up at night.

What Makes SQLMesh Different

  • Intelligent incremental processing
  • Built-in data quality checks
  • Advanced dependency management
  • Performance optimization
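To make the first point concrete, here is a toy sketch of interval-based incremental processing in plain Python. This is not SQLMesh's actual API; it only illustrates the underlying idea that a model tracks which intervals it has already computed, so a rerun touches only the missing data instead of rebuilding the whole table.

```python
from datetime import date, timedelta

def missing_intervals(processed: set, start: date, end: date) -> list:
    """Return only the days in [start, end) not yet computed.

    Toy version of interval tracking: a rerun over a wider window
    processes the gap, not the whole range.
    """
    days = []
    d = start
    while d < end:
        if d not in processed:
            days.append(d)
        d += timedelta(days=1)
    return days

# An earlier run recorded the intervals it completed...
processed = {date(2025, 1, 1), date(2025, 1, 2)}
# ...so a later run over a wider window only computes the gap.
todo = missing_intervals(processed, date(2025, 1, 1), date(2025, 1, 5))
# todo is [date(2025, 1, 3), date(2025, 1, 4)]
```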

Real-World Implementation

Coming soon: hands-on examples of migrating from dbt to SQLMesh, performance comparisons, and production deployment strategies.

Welcome to datanyblles

Planted August 25, 2025

Welcome to datanyblles - my digital space for exploring data engineering, building robust pipelines, and sharing insights from the trenches of data infrastructure.

This blog will cover:

  • Data pipeline architectures
  • Tools and technologies in the data stack
  • Best practices for data engineering
  • Real-world challenges and solutions

Let's build something amazing with data! 📊