iOS Architecture: Separating logic from effects

Luis Recuenco
Job&Talent Engineering
21 min read · Sep 11, 2018


This article is the second in a three-part series. You can find the first one here and the third and final one here.

A year with a State Container based approach

It’s been almost a year since I last wrote about the new architecture we implemented in the iOS team at Jobandtalent. It was an architecture that tried to take advantage of some of the best features of Swift (value types, generics and sum types to name a few), while embracing some unidirectional data flow fundamentals.

The results have been incredibly positive so far. Our code ended up being much simpler and easier to reason about than it used to be. We now think about state modelling in a totally different way, avoiding code with the two main problems I explained in detail in my previous article:

  • The problem of multiple, scattered, out-of-sync outputs.
  • The problem of exclusive states.

This was possible thanks to a single, unified structure modelling the state of a specific domain, and to the fact that Swift has a rich type system with Algebraic Data Types (ADTs from now on), which let us model our domain in a better way.

Still, we had a big underlying problem. A problem that was hard to foresee at first. And that’s the problem of coupled logic and effects. This article will describe the evolution of our previous architecture into a better approach by moving the logic to the value layer (pure part) and making the reference types layer (impure part) handle effects and coordination between objects.

But before I get into that, I’d like to establish some fundamentals and concerns about the State Container Architecture.

Fundamentals

What sum types are all about

Many iOS and Swift developers aren’t really familiar with the concept of sum types (also called tagged unions, variant types or, more formally, coproduct types). Swift exposes us to the concept through enums with associated values, and that’s perfectly fine. The real problem comes when we cannot explain what the real advantage of an enum with associated values is. And when we don’t understand the real purpose of a type, it’s very difficult to use it appropriately. So, here it is:

The Sum Type makes illegal states impossible

It’s that simple.

I guess the next question is: what’s an illegal state? To answer that, let’s imagine a simple callback like (Data?, Error?) -> Void. These types of callbacks were extremely common back in the old Objective-C days. What are the possible values of the tuple (product type) (Data?, Error?)?

(Data, Error) -> Not valid
(Data, nil) -> Valid
(nil, Error) -> Valid
(nil, nil) -> Not valid

As you can see, there are only two legal states. But, as (Data?, Error?) is a product type, there are exactly 2 * 2 possible combinations of values inside the type. We usually have two ways of solving this:

  • Convention: We check whether we have a valid error first. If that’s the case, we don’t care about the data.
  • Testing: Making sure none of the code paths produce the invalid states.

I don’t want convention for this. I don’t even want tests for this. I want a rich algebraic type system that prevents me from doing this in the first place. There’s no better way to protect yourself from illegal code than the fact that you cannot write it.

So, what’s the proper way to model this behaviour? You guessed it, a sum type:

enum Result<T, E: Error> {
    case success(T)
    case failure(E)
}

There is simply no way of writing the previous case with both a valid Data and a valid Error. Swift is preventing us from making illegal states possible. And that’s the real magic of sum types. Once you start modelling your state using sum types, it’s really hard to go back to languages without them. It’s the first thing I miss when I have to touch old Objective-C code.
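For instance, consuming such a Result forces us to handle exactly the two legal cases, nothing more. Here is a minimal sketch (NetworkError and handle(result:) are made-up names for illustration):

import Foundation

enum NetworkError: Error {
    case timeout
}

// Consuming a Result: the switch is exhaustive over the two legal states,
// and the (data, error) and (nil, nil) combinations simply cannot be expressed.
func handle(result: Result<Data, NetworkError>) {
    switch result {
    case .success(let data):
        print("Received \(data.count) bytes")
    case .failure(let error):
        print("Request failed: \(error)")
    }
}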

And in case you want to make the error case an impossible state, that’s what bottom types are for. In Swift we have Never, but you can easily create your own, more semantic version by simply having an empty enum.

enum NoError: Error {}
let impossible = Result<Data, NoError>.failure(???) // Cannot create

Sum types and the Rule of Representation

I’ve always loved the Unix philosophy: small, simple programs that do one thing well and are able to compose and cooperate nicely to solve bigger problems (that’s what FP is all about with functions, or OOP with objects…). But there’s a rule that really made me think the first time I read it, and that’s the Rule of Representation.

Fold knowledge into data so program logic can be stupid and robust

That’s exactly how proper, rich ADTs make me feel. They fold the complexity into the data, so I don’t need all that nasty logic that guards against illegal states and makes my code complex and hard to reason about.

By the way, chapter 9 of The Mythical Man-Month already stated this in 1975.

Show me your flowchart and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won’t usually need your flowchart; it’ll be obvious

Sum types and the Open Closed Principle (OCP)

Let’s consider a simple app where we have to draw geometric figures like squares or circles. In a more OOP-ish approach, we would create an interface or base class Figure with a draw method. Then, the implementations Square and Circle would implement that draw method providing the appropriate behaviour. In a more FP-ish approach, we would create a sum type Square | Circle with a method draw. What are the advantages and disadvantages of each approach?

  • OOP approach: we can easily add new figures by creating new classes that implement the abstraction. Unfortunately, adding new specific operations (behaviours) is difficult, as you’d have to implement the new methods in all the different classes.
  • FP approach: adding figures is more difficult, as you would need to change all the different methods that accept a figure to handle the new case. Fortunately, adding new methods and behaviours is as simple as extending the type with a new method (see the sketch below).
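To make the trade-off concrete, here is a minimal sketch of the FP-ish approach described above (the names are hypothetical):

enum Figure {
    case square(side: Double)
    case circle(radius: Double)

    // Adding a new behaviour is trivial: just one more method on the type.
    func area() -> Double {
        switch self {
        case .square(let side):
            return side * side
        case .circle(let radius):
            return .pi * radius * radius
        }
    }
}

// Adding a new figure, however, means revisiting every switch over Figure.
let figures: [Figure] = [.square(side: 2), .circle(radius: 1)]
let totalArea = figures.map { $0.area() }.reduce(0, +)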

The OOP approach complies with OCP. The FP approach doesn’t. Does this mean it’s a bad choice? Well, not all code that complies with OCP is necessarily better code. You might end up in a situation where it’s not so common to add new figures, but quite common to add new behaviours… But can we have the best of both worlds? Can we have most of our code comply with OCP while using sum types? The answer is yes. Let’s take a look at a simple example.

enum State {
    case loading
    case loaded(Data)
}

class View {
    func render(state: State) {
        switch state {
        case .loading:
            loadingView.isHidden = false
            tableView.isHidden = true
        case .loaded(let data):
            loadingView.isHidden = true
            tableView.isHidden = false
            tableView.render(with: data)
        }
    }
}

Now, imagine we add a new error case to the enum. The good news is that the code won’t compile until we handle the error case. The bad news is that there might be a lot of places that need fixing. In order to comply with OCP and limit the impact that a new case can have on our code, the simplest solution is to abstract the functions that destructure the State type (what we called queries in the previous article).

extension State {
    var data: Data? {
        switch self {
        case .loading: return nil
        case .loaded(let data): return data
        }
    }

    var isLoading: Bool {
        switch self {
        case .loading: return true
        case .loaded: return false
        }
    }
}

class View {
    func render(state: State) {
        loadingView.isHidden = !state.isLoading
        tableView.isHidden = state.isLoading
        tableView.render(with: state.data)
    }
}

Now, if we need to add the error case, we only have to update a few local functions, which are the ones used all across the code base wherever state information is needed. That way, we guarantee that all clients of our State object comply with OCP.

Shielding client code from our state shape this way not only makes our code more robust to future changes, but also helps us avoid problems related to view reuse. By modelling your view state as a sum type, your view will be reused for the different states (loading state, loaded state, etc). It’s quite easy to forget to set a subview in one of the cases (you hide a view in the loading state but forget to show it again in the loaded state). Using queries guarantees that we always set the appropriate piece of state for each of the views we need to handle.

All this is very related to the Expression Problem and the extensibility in two dimensions. In case you want to know more, I recommend Brandon Kase’s talk, Finally Solving the Expression Problem.

What value types are all about

Value types are all about inert data, immutability and equivalence. They don’t base the concept of equality on identity but rather on their underlying value.

Swift is the first language that really exposed me to value types. There are a lot of things that make value types great. Their memory is typically handled on the stack, and they lack some of the overhead reference types need to handle identity and heap allocation (reference counting, etc). But the real big deal is that they are copied on assignment (with optional Copy-On-Write (COW) semantics). What does this mean? Simply that, by using value types, we avoid the implicit coupling we have when using reference types (aliasing bugs). Being able to reason about code knowing it cannot be modified by other parts of the code base makes a huge difference. This also makes value types perfect for multithreading environments, as there’s no need for synchronization primitives.

Swift makes it extremely convenient to handle the immutability of value types via the mutating keyword. Immutability has great advantages, but it can make our code, and ultimately our architecture, a little cumbersome to use. By using mutating, we keep immutability semantics while having the convenience of mutating the value type in place, letting Swift create the new value and assign it back to the very same variable. It’s the best of both worlds.
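Here is a minimal sketch of these semantics (Counter is a made-up type):

struct Counter: Equatable {
    private(set) var count = 0

    // Reads like in-place mutation, but Swift creates a new value and assigns
    // it back to the same variable: value semantics with method convenience.
    mutating func increment() {
        count += 1
    }
}

var a = Counter()
let b = a       // copied on assignment: b is an independent value
a.increment()
assert(a.count == 1 && b.count == 0) // no aliasing: mutating a never touches b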

And finally, value types have enormous advantages in testing. They lead to easy data in, data out testing without further ceremony (no mocks whatsoever).

Not everything is great about value types though. Mutating big value types is not as performant as it could be due to the lack of persistent data structures. But hopefully we’ll get them in future Swift versions.

Separating logic from effects

It all started with testing

I’m a firm believer that testing gives you quite an accurate idea of the quality of your code. And I’m not talking about the number of tests or your test coverage; I’m talking about how easily you can test your code. If your code is simple and easy to test, that’s definitely a symptom of good design. Unfortunately, Swift’s lack of some reflection capabilities (which most mocking libraries depend upon) hinders this task.

Let’s imagine a simple app where we have a view model, which depends on a service object, which depends on an API client. We want to test the view model in isolation. Let’s also suppose we use dependency injection so we can pass any test doubles we want. Ideally, we would do something like this:

let serviceMock = Mock(Service.self)
let sut = ViewModel(service: serviceMock)
expect(serviceMock, #selector(downloadData))
sut.fetchData()
serviceMock.verify()

The mock made it very easy to cut the dependency graph and let us test the outgoing command (service.downloadData) without asserting against the real side effect outcome (the actual state change due to the data download).

With Swift, the environment for the test gets much more complicated than it used to be.

class ServiceMock: Service {
    private(set) var downloadDataCalled = false

    init() {
        // Provide all the dependencies the APIClient depends upon
        let falseAPIClient = APIClient(…)
        super.init(apiClient: falseAPIClient)
    }

    override func downloadData() {
        downloadDataCalled = true
    }
}

let serviceMock = ServiceMock()
let sut = ViewModel(service: serviceMock)
sut.fetchData()
XCTAssertTrue(serviceMock.downloadDataCalled)

As you can see, we had to subclass Service to create the spy. And the real issue is that, in order to create the mock, we need to supply all the appropriate dependencies. In this case, it’s only APIClient, but it could be much worse. And of course you need to create that false APIClient providing all its dependencies… and so on and so forth.

Another problem with the subclass and override approach is that you have to be sure to maintain the invariants of the base class. Forget to call super in any of the overridden methods and prepare yourself to have green tests even if your real code is failing miserably.

This makes it incredibly cumbersome and uncomfortable to test some code. There are some options, like creating interfaces for the sole purpose of mocking, but we don’t think this explosion of polymorphic interfaces makes the design any better. There are, of course, places where it makes a lot of sense to abstract ourselves from the outside world via an interface (database implementations, for instance), which lets us design by contract and use techniques like outside-in TDD. You can start designing with your high-level contracts without actually deciding which lower-level pieces will fulfil those contracts (you can put off your actual persistence implementation choice, for instance). If you want to automate the creation of mocks for these cases, Sourcery is a great tool for that.

There’s also the option to subclass NSObject if you prefer, but we’ll stick to pure Swift reference types for the rest of the article.

Our current solution is to have a simple composition root approach where we can inject some test doubles for specific parts of the graph without supplying the rest of the dependencies. This eases some of the pain and we might talk about this in a future post.

Just to be clear, I’m not advocating the blanket use of mocks in testing, but I do think they are very valuable in some cases. They let us test in isolation and, more importantly, they make bad code obvious when you need to mock six dependencies and override quite a few methods just to exercise your subject under test. On the other hand, they tend to be coupled with implementation details, making tests break when those details change without actually helping catch real issues.

Also, some people think that not testing against the real thing is not valuable at all. Some of us have had bad experiences with false positives in the past, where a problem slipped through even though one of the tests should have caught it in the first place.

Integration tests are not really the answer to our problems though. Even if they might seem like a good solution to our mocking issues, the number of integration tests you need to cover all the important code paths grows exponentially, instead of linearly as with isolated tests. To know more, I highly recommend J. B. Rainsberger’s talk: Integrated Tests Are a Scam.

Not everything in Swift made testing harder, though. In fact, the aforementioned Rule of Representation and rich ADTs make it possible to skip some tests that are mandatory in other languages. Testing that some input is in a valid range, for instance, is something that can easily be modelled with a proper type.
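For instance, instead of writing tests that check a percentage never leaves the 0–100 range, we can make out-of-range values unrepresentable. A sketch (Percentage is a made-up type):

struct Percentage: Equatable {
    let value: Int

    // The failable initializer is the only entry point, so every Percentage
    // in the program is valid by construction: no range tests needed downstream.
    init?(_ value: Int) {
        guard (0...100).contains(value) else { return nil }
        self.value = value
    }
}

let valid = Percentage(42)    // .some
let invalid = Percentage(200) // nil: the illegal value never enters the system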

I’d like to quote a sentence from Uncle Bob’s article The Dark Path.

Now, ask yourself why these defects happen too often. If your answer is that our languages don’t prevent them, then I strongly suggest that you quit your job and never think about being a programmer again; because defects are never the fault of our languages. Defects are the fault of programmers. It is programmers who create defects — not languages.

And what is it that programmers are supposed to do to prevent defects? I’ll give you one guess. Here are some hints. It’s a verb. It starts with a “T”. Yeah. You got it. TEST!

While there’s some truth in those words, it’s also true that some languages make it easier to commit mistakes than others. Tests are definitely the first tool for preventing mistakes in software, but we shouldn’t forget about types and static analysis tools, which make a large number of the tests typical of dynamically typed languages obsolete.

So… interfaces are not the solution. Integration tests are not the solution either. What is the solution then? In my experience, code that is not easily tested ends up not being tested at all, or at least not all its proper code paths. So we asked ourselves what we could do to make the most of our testing suite and be able to test most of the important business rules without suffering along the way. That’s when we discovered the problem of coupled logic and effects.

Coupled logic and effects

Let’s take a look at the following code.

class Store {
    func fetchData() {
        guard !state.isLoading else { return } // logic
        service.downloadData() // effect
    }
}

That code is really simple, on purpose, but it perfectly shows the problem of entangled logic and effects. Let’s imagine we want to test it. It seems sensible to write these two tests:

  • Given state.isLoading = false, service receives the method downloadData when calling fetchData.
  • Given state.isLoading = true, service does not receive the method downloadData after calling fetchData.

In order to do this, we need to build the proper infrastructure for the test, that is, set up both the correct state and the service spy to check that the messages are being sent correctly. This seems like an easy task when we just have two tests, but imagine a real-life example, where the logic is much more complex and the effect can be triggered under different circumstances.

So, what’s the impact of logic and effects in terms of testing?

  • Logic is responsible for the different code paths. The more logic we have, the more code paths we need to test.
  • Effects make our code difficult to test, as they require testing infrastructure (test doubles).

What happens when we have logic and effects together? We get the worst of both worlds: a lot of difficult-to-test code.

Let’s try a different approach. How about moving the logic that is responsible for an effect to a value layer that doesn’t trigger the effect but rather models the forthcoming effect via a value type? That layer will only be responsible for performing the logic and deciding whether the effect should be triggered. Then someone else will be responsible for taking that effect value object and actually performing it (network request, database access, IO, etc).

struct State {
    enum Effect {
        case downloadData
    }

    let isLoading: Bool

    func fetchData() -> Effect? {
        guard !isLoading else { return nil } // logic
        return .downloadData // effect representation
    }
}

class EffectHandler {
    private let service: Service

    init(service: Service) {
        self.service = service
    }

    func handle(effect: State.Effect) {
        switch effect {
        case .downloadData:
            service.downloadData() // actual effect
        }
    }
}

This separation is much more profound than it may seem at first. We’ve separated the what from the how. The decision making from the actual execution. The algebra from the interpreter. We’ve made the value the boundary.

But let’s go back to our testing example. Now, no matter how complex the logic is, the tests are extremely simple. We only need to create the correct state (given) and assert that the result of fetchData (when) is the correct Effect (then).

XCTAssertEqual(State(isLoading: false).fetchData(), .downloadData)
XCTAssertNil(State(isLoading: true).fetchData())

No need for cumbersome infrastructure with test doubles. No need for difficult tests that grow whenever our logic grows. When our logic grows and our effect has to be triggered under different circumstances, we only need to write new dumb data in, data out tests checking that the Effect is the one we expect.

Sure, we still have to test that the effect handler actually handles the effects correctly, and that’s where we’d have to use test doubles, but we only have to do that once. Compare that with our previous example, where the logic and the effects were together. In fact, as we’ve dramatically reduced the code paths in our imperative shell layer, we could easily use integration testing without worrying about the number of tests we’d need to cover the appropriate code paths. This would avoid the need to create cumbersome test doubles.
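That single collaboration test could look something like this. It’s a sketch that assumes a Service protocol with a downloadData requirement and a hypothetical ServiceSpy conforming to it:

import XCTest

// Hypothetical abstraction over the real service, just for this sketch.
protocol Service {
    func downloadData()
}

final class ServiceSpy: Service {
    private(set) var downloadDataCalled = false
    func downloadData() { downloadDataCalled = true }
}

final class EffectHandlerTests: XCTestCase {
    func testDownloadDataEffectReachesTheService() {
        let spy = ServiceSpy()
        let sut = EffectHandler(service: spy)

        sut.handle(effect: .downloadData)

        // The only test double in the whole flow lives here, at the shell.
        XCTAssertTrue(spy.downloadDataCalled)
    }
}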

We have also made our testing code more robust. Now, if our mock implementation breaks, we only break the single test that exercises the mock interaction: the collaboration test we have for the EffectHandler class. In our previous implementation, every single test that dealt with the effect would have broken.

Our new game has two simple rules:

  • Move as much logic as possible to the value layer. This is where our business rules are. This is where isolated testing is easy. This is our pure part, our functional core.
  • Model effects as inert value objects that don’t behave and have a thin layer of objects handling effects and coordination on top. This is where we download the data. This is where we save data to disk. This is where we handle I/O. This is where we can afford integration testing. This is our impure part, our imperative shell.

The concept of a functional core with an imperative shell is not new. In fact, it dates back to 2012, when Gary Bernhardt talked about it in Boundaries. It’s one of the best talks I’ve ever watched, and a lot of the concepts in this article are heavily inspired by it.

From a State Container based approach to an Effects Handler based approach

Architecture changes are always quite difficult in brownfield code bases. You have to be quite careful about the number of new things you bring to the table. It’s important to analyze all the trade-offs and always aim for something incremental, with low impact on developer productivity and noticeable improvements. This is easier said than done, of course.

The State Container based approach we took a year ago was exactly the right trade-off. It introduced just the right amount of new things, so people didn’t feel overwhelmed by the change. It was an incremental update to the traditional MVVM approach we were already using. Now is exactly the right time to bring in some new ideas, like modelling events and effects as sum types, together with some deeper constraints that force a clear separation of logic and effects.

Let’s now talk about the main components of the architecture.

State

The state is arguably the most important part of the architecture, our functional core. The consumption of the state follows the same rules as in the previous architecture: we use the concept of queries to make state clients agnostic of its internals and comply with OCP, as we previously explained. As for mutation, we previously had several mutating methods, which we called commands. Now we have a single, public and unified way to change the state: the mutating handle(event:) function, which returns an optional Effect. This is one of the most important changes. The State is not only responsible for deciding the next state based on the event, but also for deciding what happens next (the effect). This was inspired by how the Elm language handles side effects.

The state definition looks like this.

protocol State: Equatable {
    associatedtype Event
    associatedtype Effect

    mutating func handle(event: Event) -> Effect?
    static var initialState: Self { get }
}

To get a clear understanding, let’s consider a simple app that downloads all the recruitment processes where a candidate can apply. The state will look like this:

enum ProcessState: State {
    case idle
    case loading
    case loaded([Process])
    case error(String)

    static var initialState: ProcessState {
        return .idle
    }

    enum Event: Equatable {
        case fetchProcesses
        case load(processes: [Process])
        case userDidLogOut
    }

    enum Effect: Equatable {
        case downloadProcesses
    }

    mutating func handle(event: Event) -> Effect? {
        switch (self, event) {
        case (.loading, .fetchProcesses):
            fatalError()
        case (_, .fetchProcesses):
            self = .loading
            return .downloadProcesses
        case (_, .load(let processes)):
            self = .loaded(processes)
        case (_, .userDidLogOut):
            self = .idle
        }
        return nil
    }
}

As you can see, when we send the fetchProcesses event, the state changes to the loading state and returns the effect that needs to be triggered next, downloadProcesses. The piece responsible for handling this effect is the next piece of the puzzle, the effect handler. But before we get to that, let’s talk about events.

Using a sum type to model messages to objects might not seem the most idiomatic choice at first. We are used to modelling messages with methods, but events as values have some nice advantages that make them a welcome addition.

  • Common API. Where we previously had different mutating methods to change the state, we now have a single method for that.
  • Thanks to the IDE autocompletion, we can easily see all the different events that will produce state change by writing state.handle(event: .|), where | represents the caret.
  • The event represents the future state change. It’s not actually performing any state change. It’s the same idea behind modelling effects as value objects.
  • Events can be logged or serialized (see the sketch after this list).
  • Events, modelled via sum types, allow us to leverage optics and prisms to compose different states in a simple way.
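Because events are plain values, cross-cutting concerns like logging come almost for free. A minimal sketch (loggingDispatch is a made-up helper, not part of the architecture):

// Wraps any dispatch function with logging: possible only because an event
// is an inert value that can be printed, stored or serialized.
func loggingDispatch<Event>(_ dispatch: @escaping (Event) -> Void) -> (Event) -> Void {
    return { event in
        print("→ \(event)")
        dispatch(event)
    }
}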

We are not opinionated about how you should structure your state. You might find it interesting to have a single source of truth (like Redux) or multiple structures (like Flux). Having a single structure makes some things easier, like avoiding inconsistent states across different nodes of your state tree, but it might not be easy to adopt in your current application. Having several stateful pieces is much easier to adopt in brownfield projects, but you need to take care of the synchronisation between the dependencies around those stateful pieces. Choose wisely.

Effect Handler

An effect handler will be responsible for handling the different effects associated with a specific state shape. The definition looks like this.

protocol EffectHandler: class {
    associatedtype S: State
    func handle(effect: S.Effect) -> Future<S.Event>?
}

As you can see, the effect handler is not only responsible for the effect, but also for providing the optional event that will be sent back to the state.

The real effect handler will look like this.

class ProcessEffects: EffectHandler {
    private let service: ProcessService

    init(service: ProcessService) {
        self.service = service
    }

    func handle(effect: ProcessState.Effect) -> Future<ProcessState.Event>? {
        switch effect {
        case .downloadProcesses:
            return service.fetchProcesses()
                .map { ProcessState.Event.load(processes: $0) }
        }
    }
}

Event Source

State and effect handlers are not enough. We need something listening to the different stateful pieces whose changes should eventually update our state. That’s what an event source is for. The definition looks like this.

protocol EventSource: class {
    associatedtype S: State
    func configureEventSource(dispatch: @escaping (S.Event) -> Void)
}

Let’s imagine that we have a store handling our current session state and that we need to clear all our processes when the user logs out.

class ProcessEvents: EventSource {
    private let sessionStore: Store<SessionState>
    private var token: SubscriptionToken!

    init(sessionStore: Store<SessionState>) {
        self.sessionStore = sessionStore
    }

    func configureEventSource(dispatch: @escaping (ProcessState.Event) -> Void) {
        token = sessionStore.subscribe { state in
            guard !state.sessionValid else { return }
            dispatch(.userDidLogOut)
        }
    }
}

As you can see, we purposely provide only the function (S.Event) -> Void in configureEventSource. This prevents any access to the underlying state shape, forcing us to send the event with the appropriate data and to move the logic to the value layer.

We are almost there… There’s only one missing piece… The piece that glues everything together, the Store.

Store

The store is the missing fundamental piece of the puzzle and it’s responsible for quite a few things:

  • It wraps the state value object and allows others to know when it changes (subscription handling).
  • It coordinates state and effect handler to produce the new state and execute the different associated effects.
  • It provides the event source with the function to use to send events.
  • It’s the façade for view controllers and other objects to send events to the state.
  • It handles memory management of effect handler and event source.

The implementation is the most complex of all. It uses type erasure to handle communication with the effect handler and the event source. For simplicity, I will omit all the type erasure and state subscription code, which is exactly the same as in the previous article.

final class Store<S: State> {
    private let effectHandler: AnyEffectHandler<S>
    private let eventSource: AnyEventSource<S>

    init<EH: EffectHandler, ES: EventSource>(effectHandler: EH, eventSource: ES)
        where EH.S == S, ES.S == S {
        self.effectHandler = AnyEffectHandler(effectHandler)
        self.eventSource = AnyEventSource(eventSource)
        self.eventSource.configureEventSource { [unowned self] in
            self.dispatch(event: $0)
        }
    }

    var state: S { … }

    func subscribe(_ block: @escaping (S) -> Void) -> SubscriptionToken { … }

    @discardableResult
    func dispatch(event: S.Event) -> Future<S> {
        let effect = state.handle(event: event)
        let currentStateFuture = Future(value: self.state)
        let effectFuture = effect.flatMap {
            effectHandler.handle(effect: $0)
        }
        // Recursively dispatch the event produced by the effect, if any.
        let nextEventFuture = effectFuture?.flatMap {
            self.dispatch(event: $0)
        }
        return nextEventFuture.map { future in
            currentStateFuture.flatMap { _ in future }
        } ?? currentStateFuture
    }
}

The most interesting part is the dispatch function, which is responsible for the state/effect loop. It processes an incoming event, producing a new state and an optional effect. Next, we handle that effect, calling dispatch recursively until we finish the chain and produce the final state. The use of futures is optional, but it’s handy to know when a specific event has finished.

Finally, it is meant to be used like this.

// Create the store with the effects handler and event source
let effects = ProcessEffects(service: …)
let events = ProcessEvents(sessionStore: …)
let store = Store<ProcessState>(effectHandler: effects,
                                eventSource: events)

// Start dispatching events to it
store.dispatch(event: .fetchProcesses)

The whole picture

Let’s finish with a quick recap of all the concepts to see how everything fits together.

  • State: Your logic lives here. Your data lives here. This is where you have your business rules and decision making, as well as deciding which effects should be triggered after the different state changes and events.
  • Effect Handler: Responsible for deciding the actions to take for any of the effects.
  • Event Source: Responsible for listening to different stateful pieces and sending events to change the state accordingly.
  • Store: The piece that ties everything together. The external façade to the system. The UI talks to it to send events and change state.
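And, to close the loop, a minimal sketch of how the UI might sit on top of the Store (ProcessesViewController and its render method are hypothetical):

import UIKit

final class ProcessesViewController: UIViewController {
    private let store: Store<ProcessState>
    private var token: SubscriptionToken?

    init(store: Store<ProcessState>) {
        self.store = store
        super.init(nibName: nil, bundle: nil)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    override func viewDidLoad() {
        super.viewDidLoad()
        // Render every new state; queries keep the view agnostic of the state shape.
        token = store.subscribe { [weak self] state in
            self?.render(state: state)
        }
        // The UI only sends events; logic and effects live elsewhere.
        store.dispatch(event: .fetchProcesses)
    }

    private func render(state: ProcessState) {
        // Drive subviews from queries on the state here.
    }
}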

Conclusion

I’m quite pragmatic about architectures. As I said in my previous article, there’s no such thing as the best architecture.

In the same way that some languages make you commit fewer mistakes than others, I do think some architectures let you change your code more easily over time. And remember, one architecture is better than another if it lets you change your code more easily over time. Unfortunately, there is no silver bullet for that.

  • You can choose to have a lot of rules and obstacles around how code should be structured. This will make developers ship features more slowly and ultimately be less happy.
  • You can choose to have a more flexible architecture, trusting people’s good judgement to do the right thing, with some shallow guidelines about the structure of the code base. This will likely lead to inconsistencies and code that is not very easy to maintain in the long run.

The ideal architecture is the one that lets our code evolve healthily over time without redundant overhead.

Over the last few years, the iOS community has taken a lot of ideas from other areas. React and Flux popularised the unidirectional approach, and functional programming has been gaining a lot of traction and popularity lately. Unfortunately, we sometimes fail to consider the imperative context where iOS lives, and force ourselves to use techniques that simply aren’t idiomatic on our platform.

And that was our goal from the beginning: to create an architecture inspired by a lot of different languages and paradigms that feels good to use on iOS, without feeling alien.

This is only the beginning, but I think we’ve made a pretty good start.

We are hiring!

If you want to know more about what it’s like to work at Jobandtalent, you can read the first impressions of some of our teammates on this blog post or visit our Twitter.

Thanks to Ruben Mendez and Victor Baro for all the valuable feedback while developing the architecture.
