After 15 years as a backend engineer, I’ve configured my share of message queues. RabbitMQ, Redis, SQS, Kafka – they’re all powerful tools. But last year, I found myself setting up RabbitMQ for a small side project, and I couldn’t shake one thought: “This is overkill.” I just needed to process some background tasks. Instead, I was reading about exchanges, bindings, and durability settings. An hour later, I was still configuring. That’s when I decided to build TLQ – Tiny Little Queue.
The Problem with Existing Queues
Don’t get me wrong – RabbitMQ and others are excellent. But for many projects, they’re like bringing a forklift to move a chair. When you’re building a proof of concept, testing microservices locally, or running a small startup project, you need something that just works. No cluster setup. No persistence configuration. No authentication complexity. You need a queue that gets out of the way.
Enter TLQ
TLQ is intentionally simple. You run it with one command:
docker run -p 1337:1337 nebojsa/tlq
That’s it. Your queue is running. Add a message, get a message, delete a message. No configuration files, no admin panels, no complexity.
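In practice, that’s three HTTP calls. Here’s a rough sketch of a full cycle (the /add call matches the real endpoint shown in the Try It section below; the /get and /delete request shapes are my simplified sketches – check the repo for the exact formats):
# Add a message
curl -X POST localhost:1337/add \
  -H "Content-Type: application/json" \
  -d '{"body":"Hello TLQ!"}'
# Get a message - request shape is a sketch; see the repo for the exact format
curl -X POST localhost:1337/get \
  -H "Content-Type: application/json" \
  -d '{"count":1}'
# Delete a processed message - also a sketch; use the id returned by /get
curl -X POST localhost:1337/delete \
  -H "Content-Type: application/json" \
  -d '{"ids":["<message-id>"]}'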
Why Rust?
I chose Rust for two reasons. First, I wanted to improve my Rust skills – there’s no better way to learn than building something real. Second, Rust gives us memory safety and performance without the overhead of a garbage collector. Perfect for a tool meant to be lightweight. The result? TLQ uses minimal resources and starts instantly. It’s the kind of tool you can run on a Raspberry Pi without thinking twice.
Design Decisions
Every feature in TLQ was a conscious choice:
- In-memory only: No persistence means no disk I/O, no data corruption, no recovery modes. When you’re developing, you don’t need durability – you need speed.
- 64KB message limit: This prevents abuse and keeps things snappy (there’s a quick demo of the limit after this list). If you need to send larger payloads, you’re probably ready for a more complex solution.
- Single node: No clustering, no consensus protocols, no split-brain problems. Just a queue.
- Multiple language support: I built clients for Rust, Node.js, Python, and Go. Use whatever language your project needs.
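To make the 64KB limit concrete, here’s a quick way to poke at it from the shell. I’m assuming the limit applies to the message body and that the server rejects oversized payloads with an error instead of truncating them – the exact status code may differ:
# Build a payload just over 64KB (65,537 bytes of 'x')
BODY=$(head -c 65537 /dev/zero | tr '\0' 'x')
# Expect this to be rejected rather than queued
curl -X POST localhost:1337/add \
  -H "Content-Type: application/json" \
  -d "{\"body\":\"$BODY\"}"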
Who Is This For?
TLQ isn’t trying to replace production message queues. It’s for:
- Developers who want to test message-driven architectures locally
- Startups building MVPs who need queues without the ops overhead
- Students learning about distributed systems
- Side projects where simplicity beats features
Think of it like SQLite. You wouldn’t run Wikipedia on SQLite, but it’s perfect for development and smaller applications.
Six Months Later
I started TLQ six months ago as a learning project. Today, it has official clients in four languages, Docker support, and comprehensive tests. The core philosophy hasn’t changed: stay tiny, stay simple. The name says it all – Tiny Little Queue. It’s not trying to be everything. It’s trying to be just enough.
Try It
If you’ve ever thought, “I just need a simple queue,” give TLQ a shot:
# Install with Cargo
cargo install tlq
# Or run with Docker
docker run -p 1337:1337 nebojsa/tlq
# Add a message
curl -X POST localhost:1337/add \
  -H "Content-Type: application/json" \
  -d '{"body":"Hello TLQ!"}'
Check out the GitHub repo or visit tinylittlequeue.app for more examples.
Moving Forward
TLQ is still young (v0.2.0), but it’s ready for testing, and I’d love your feedback. What works? What doesn’t? What’s missing? Sometimes the best solution isn’t the most powerful one. Sometimes it’s the one that gets out of your way and lets you build. That’s why I built TLQ.
If that resonates, open an issue or reach out and tell me how you’re using it.