Seeking Heroku Add-on Alpha Testers

heroku, rulesio

Do you have a Ruby on Rails app running on Heroku?
We need your help!

We want to offer our rules engine as a Heroku add-on as quickly as possible, and we are looking for feedback on what we have done so far.

With it, you can connect the event data from your users to third-party SaaS tools like Mailchimp, Urban Airship, and SendGrid, and put some logic (rules) in between.
To make this more convenient, we have a Ruby gem available, and some example rules will be added to your account once you’ve registered.

How to participate?
Send an email to team[at] with the email address of your Heroku account and a short sentence confirming that you agree to be invited to our alpha test. You will then receive an invitation from us and will be able to add our add-on to your app on Heroku by typing

heroku addons:add rulesio:test

into your command line.

Do you have plans for a Heroku add-on of your own?
Let us know and we will be happy to alpha test your add-on as well!

The Last Gem You’ll Ever Need?

geekier, open source

We have benefited from many open source projects, such as Ruby on Rails, D3, and Ember. Now we want to start giving back, with a project called Geekier.

Background: so many APIs, so many gems

It is increasingly the case that building any sort of application, be it for the web, or mobile, or desktop, means connecting to several online services via APIs. As a developer, I mostly view this as a good thing, because it means I can offload all sorts of things that I care about doing well but that aren’t central to how I provide my unique value. I want things like payment handling, exception reporting, and analytics, but I don’t want to build them all myself.

As a frequent consumer of APIs, I’ve looked at lots of API specs and libraries over the years. If you are an API provider, and you’re thinking about writing a client library (or ruby gem, or python egg, or …) then I have one piece of advice for you: don’t do it.

For every worthwhile API out there, there’s at least a handful of Ruby gems I can choose from. And often, they all suck. I’m sick of reading the docs, looking over the code and tests, examining the dependencies for troublemakers, checking to see if issues are being addressed, etc, etc.

Give me data, not code, please

When we started building, we knew we would be talking to lots of APIs, and we knew that was going to become painful unless we took drastic action. So we adopted a data-driven approach: for each API we integrate, we don’t pay any attention to whatever gems may exist; instead, we read the documentation and create a YAML file that describes how the API works.

This has worked out great for us. We have a single library called Geekier (based on Faraday) that works off of these API descriptions to connect to all of the APIs we care about. Any effort we put into Geekier – to do better parameter validation, or logging, or error reporting, for example – gives us value across all of those APIs.
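To make the data-driven idea concrete, here is a minimal sketch in Ruby. The YAML fields and the `build_request` helper are invented for this example (our real descriptions, and Swagger, are much richer); the point is that the client reads a description and derives the request, rather than shipping hand-written client code per API.

```ruby
require "yaml"

# A hypothetical, minimal API description. The field names here are
# invented for illustration; a real description language covers auth,
# response shapes, error handling, and more.
description = YAML.safe_load(<<~YAML)
  base_url: https://api.example.com
  operations:
    send_message:
      method: post
      path: /messages
      required: [to, body]
YAML

# Build a request hash from the description instead of from hand-written
# client code. A library in the spirit of Geekier would hand this to
# an HTTP adapter such as Faraday.
def build_request(description, operation, params)
  op = description["operations"].fetch(operation)
  missing = op["required"] - params.keys
  raise ArgumentError, "missing params: #{missing.join(', ')}" unless missing.empty?
  { method: op["method"].to_sym, url: description["base_url"] + op["path"], body: params }
end

request = build_request(description, "send_message", "to" => "alice", "body" => "hi")
```

Any improvement to `build_request` (better validation, logging, error reporting) immediately benefits every API described this way.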

Where we’re going

The next step in making this more awesome would be for more of us to share this perspective on how to work with APIs, and for the community to start sharing these API descriptions. This would be a big win for everyone using APIs.

This will also be a big win for API providers. If your API is so complex in its behavior that it can only be fully and accurately described with client code that you write by hand, you’re doing something wrong. If you instead describe your API with data rather than code, then as your API evolves, you won’t have to update and maintain a set of client libraries in various languages. You will simply create a new API description for the new version, and all of the clients will immediately be able to use the new API, regardless of whether they’re written in JavaScript, Ruby, Python, Java, Scala, etc.

To help move this along, we’re working on extracting Geekier from our codebase so that we can release it as its own open source gem. We’re also moving from our own homegrown API description language to an emerging standard: Swagger.

Is Geekier the last gem you’ll ever need? No, but it will help.

Update: Join the ongoing discussion in the geekier-apis Google group.

WhenAUser Becomes


We’ve been busy this summer, but not busy blogging, so we owe you an update. We have a new home: whichever of our earlier names you’ve known us by, it is now the place for all our products and technology.

You’ve been asking us to do a few things: make it easier to get started with our technology, and offer more value out of the box. We’re learning we can do this best by focusing on specific developer platforms, and we are excited to announce the availability of the rulesio gem for Ruby web app developers as our first step in this direction. It only takes minutes to add the gem to your application, and it provides immediate access to our entire catalog of solutions.
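For the curious, getting started begins the way any gem does, in your Gemfile. This is only a sketch: the gem name comes from the announcement above, but any configuration the gem requires beyond this is not shown here.

```ruby
# Gemfile -- the gem name is taken from this announcement; any further
# setup (API keys, initializers, etc.) is not covered in this post.
gem "rulesio"
```

followed by the usual `bundle install`.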

Along with the Ruby gem we are launching three initial solutions: Exception Reporting, Bad User Experience Reporting, and User Segmentation. These solutions cover core use cases explored by users of our private beta. Each is implemented as a combination of Rules and Workflows/Funnels in our rules engine – they are out-of-the-box ready, but can then be modified and customized in very interesting ways. Future articles will describe some of the possibilities in much more detail, so stay tuned.


EyeEm, WhenADemo

On Wednesday we had a great evening at the Berlin Tech Meetup, where I gave a demo of what we do and a short talk on the challenges we’re facing. More on this, plus a video, after the weekend.

For now, here’s a fun bit we had planned with our friends at EyeEm for the meetup. It didn’t quite work out, because the wifi was overloaded.

This is how we did it:

Together with EyeEm, we turned the photo updates of one of their albums into an event stream that WhenAUser understands. Then we created a rule that, for every photo-update event, takes the photo URL, mustachifies it with a third-party mustache service, and uploads the result back to EyeEm with the title ‘mustachified!’.

This is why we did it:

Because we can, that’s why!


“With great power comes great responsibility.”

Automated retention messages can be fantastic for user loyalty, leading to greater revenue… but they can also be abused. With WhenAUser, we can help you make sure that your messages reach only the right people at the right time, without spamming your users.

Berlin Meets Boulder


Today is the first day of GlueCon, the web application integration conference in Boulder, and our team has the honor of representing Berlin there! We were selected out of hundreds of applicants as one of only twelve companies to demo their products in the DemoPod competition. Check out our GlueCon Demo Funnel to see what’s going on and become part of the experience.

If what we do is exciting to you, if you want to support the Berlin startup community, or if you feel a young Berlin tech company definitely deserves to win the DemoPod competition, support us with your vote by texting ‘8’ to +1 (484) 652 8683 (powered by Twilio).

Vote for WhenAUser


Support a young Berlin tech company in the GlueCon DemoPod competition by texting ‘8’ to +1 (484) 652 8683

(powered by Twilio)

Event Sourcing Use Cases Part 1

event sourcing, events

Post Mortem

This is the first use case I want to talk about.

Say something went wrong in your app and, as a result, a user got blocked for abuse, but that user claims not to have done anything to break the rules. Now you can go to your staging environment, replay the events leading up to the block in that system, and see first-hand what happened.

What’s needed for this is a comprehensive log of events for basically any interaction that happens around your app, plus a staging system to replay the events to. Depending on the sensitivity of your data, you may want to anonymize some details beforehand, but that’s about it.
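As a sketch of the mechanics in Ruby: the event shape, the anonymization step, and the dispatch callback below are all invented for illustration; in a real setup the log would live in durable storage and the dispatcher would send each event to the staging system.

```ruby
# A minimal in-memory event log; in practice this would be read from
# the production system's durable event store.
EVENT_LOG = [
  { "actor" => "user42", "action" => "login",        "at" => "2013-07-01T10:00:00Z" },
  { "actor" => "user42", "action" => "post_comment", "at" => "2013-07-01T10:05:00Z" },
  { "actor" => "user42", "action" => "blocked",      "at" => "2013-07-01T10:06:00Z" }
]

# Strip sensitive details before events leave the production system.
def anonymize(event)
  event.merge("actor" => "anon-#{event['actor'].sum % 1000}")
end

# Replay every event up to and including the incident, handing each one
# to a dispatcher block (which would POST it to staging).
def replay_until(log, action, &dispatch)
  index = log.index { |e| e["action"] == action }
  log.take(index + 1).map { |e| anonymize(e) }.each(&dispatch)
end

replayed = []
replay_until(EVENT_LOG, "blocked") { |e| replayed << e }
```

Watching staging process `replayed` then shows first-hand how the block came about.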

Testing Bug Fixes

This one is somewhat similar to a Post Mortem in that you take data out of the live system’s event source.

When a bug occurs, you can grab a record of the last events leading up to the incident and extract them. Once the fix is deployed to a testing or staging environment, you can replay that portion of the events and see whether the bug shows up again. It’s even possible to collect several sets of events and run all of them.


Migrating to a New System

When you have an event log and want to move the app environment to a new system, you can just replay the whole log there, and the new system will be up to date with the old one. This can even be done in a multi-master setup, where both servers keep each other in sync until the transition is complete and you shut down the old server.

I hope this gives you some ideas about what can be done with event sourcing. More on this soon.

Events and Event Sourcing

event sourcing, events

What’s event sourcing?

We have talked about events and what it means to record them. Once you record events, you can also use the event log as a source of data: you can reconstruct the state of your application at a certain point in time, undo changes, insert things into the timeline and see how they affect later states, or pick specific events and run them again.
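As a small illustration of reconstructing state at a point in time (the events and the fold below are invented for the example), consider a balance built purely from deposit and withdrawal events:

```ruby
# Invented example events; `at` is a logical timestamp.
EVENTS = [
  { at: 1, type: :deposit,  amount: 100 },
  { at: 2, type: :withdraw, amount: 30 },
  { at: 3, type: :deposit,  amount: 50 }
]

# Fold only the events up to `time` to reconstruct the state "as of"
# that moment -- the essence of event sourcing: state is derived from
# the log, not stored as the primary record.
def balance_at(events, time)
  events.select { |e| e[:at] <= time }.reduce(0) do |balance, e|
    e[:type] == :deposit ? balance + e[:amount] : balance - e[:amount]
  end
end

balance_at(EVENTS, 2) # state after the first two events: 70
balance_at(EVENTS, 3) # current state: 120
```

Undoing a change or inserting one into the timeline is then just editing the list and folding again.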

There are lots of use cases for event sources, and today I want to talk about one you are likely already familiar with, but might not have expected to belong to this topic: source code management.

SCMs like Git manage a list of file changesets (events) that are applied in a certain order to form the current state of the working copy. You can go back in time, fork the repo, insert changes, undo commits, and cherry-pick changes from history or other branches.

So, what about it?

Now keep in mind this picture of a repository as a timeline, and think about how often you go back, review changes, undo things that happened, and perhaps even insert changes to alter the history of what you’re looking at right now. Doesn’t this feel like time travel?

We all know the great responsibility that comes with the power of this kind of event stream, but also the possibilities that come with it. Those possibilities could carry over to your application data, too, if you made use of event sourcing for it.

I will talk about several of those use cases in the future, so stay tuned if you’re curious.

We ♥ Event Stream Processing


For the technical team, event streams are at the heart of what we do here. Until recently, event stream processing was a rather esoteric topic, seemingly not having a great deal to offer the average web developer. So why do we find it so interesting?

You may have noticed that event streams are playing an increasingly important role in the architecture of major web platforms, such as Facebook, Twitter, and LinkedIn. For these big applications, the motivating use cases for event stream processing are advanced features such as detecting fraud, identifying trending topics, and generating personalized recommendations.

Most SaaS product teams aren’t building something on the scale of Facebook or LinkedIn, but event streams can be useful for almost any web application. To demonstrate why, here are two simple examples (while we use JSON for all of our events, I wrote these in pseudo-English to make them easier to follow):

customer47 viewed product12
customer47 added product12 to shopping cart
customer47 viewed check-out page
customer47 viewed shipping-charges page
... 1 day later with no payment recorded ...
customer47 has abandoned their shopping cart

created an account
logged-in
followed user123
followed user456
... after following many more in a short period of time ...
unfollowed user123
unfollowed user456
... after unfollowing many more in a short period of time ...
is a suspicious character

Both of these examples involve a simple pattern of behavior leading to a conclusion with important implications for your business. We’re building WhenAUser to help you not just detect these patterns, but also take action when they occur.
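As a rough sketch of what detecting the first pattern could look like (the event names, the 24-hour window, and the stream shape are all assumptions for the example, not WhenAUser’s actual rule syntax):

```ruby
# Does this stream match the abandoned-cart pattern: something was added
# to the cart, no payment was recorded, and more than `window` seconds
# have passed since the add?
def abandoned_cart?(events, now:, window: 24 * 60 * 60)
  added = events.find { |e| e[:action] == "added_to_cart" }
  paid  = events.any? { |e| e[:action] == "payment_recorded" }
  !added.nil? && !paid && (now - added[:at]) > window
end

stream = [
  { action: "viewed_product",  at: Time.utc(2013, 7, 1, 9, 0) },
  { action: "added_to_cart",   at: Time.utc(2013, 7, 1, 9, 5) },
  { action: "viewed_checkout", at: Time.utc(2013, 7, 1, 9, 6) }
]

abandoned_cart?(stream, now: Time.utc(2013, 7, 2, 10, 0)) # more than a day later
```

A rule engine evaluates checks like this continuously as events arrive, and the "take action" half would then fire a workflow, such as a reminder email.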