WebAssembly Summit 2020 Recap

Organizers and Speakers 👏

Website: WebAssembly Summit

Date: Feb 10, 2020

Recording: https://www.youtube.com/watch?v=WZp0sPDvWfw

The event was the first WebAssembly conference: a single track over a single day. A lot of contributors and early adopters were in the room, and the topics were geared toward them. I went there hoping to find use cases I could apply to my day-to-day work, but I didn’t find any.

Here are my notes from the conference. I am no expert with WebAssembly so my notes may be incorrect:

Opening Keynote: WebAssembly: Building a new kind of ecosystem – Lin Clark

  • WebAssembly on Web is portable and secure.
  • WebAssembly on Server may lose security if we are not careful.
  • Currently, in the Node.js ecosystem, there is no sandbox to protect the system from third-party code
    • Malicious code
      • Example: electron-native-notifier. A Bait and Switch scheme to steal bitcoins
        • Memory access
      • The amount of malicious code doubled from 2017 to 2019
    • Vulnerable code
  • Spinning up a new OS process per library (the way the OS isolates programs) is too expensive
    • Memory issue
    • IPC is hard to deal with
  • Solution (WebAssembly Nanoprocesses):
    • Sandbox
    • Memory model (memory isolation)
    • Interface types: copy from an isolated memory to another memory to pass the data
    • WebAssembly System Interface (capability based security)
    • The missing link
      • How to pass the said capability to the dependencies
  • Nanoprocesses aren’t a standard yet, just a convention. The Bytecode Alliance works to provide a secure foundation. (A minimal sketch of import-based capabilities follows below.)
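
The capability-based idea can be illustrated with the plain WebAssembly JavaScript API: a module can only call what the host explicitly hands it through the import object. This is a minimal sketch of that idea, not code from the talk; the file name and import names are hypothetical.

// Minimal sketch: a Wasm module only receives the capabilities we pass in.
// The import names below are hypothetical.
async function runSandboxed(bytes: BufferSource) {
  const imports = {
    env: {
      // The only "capability" this module gets is logging a number.
      log: (value: number) => console.log("guest says:", value),
      // No file system, no network: anything not passed here is unreachable.
    },
  };
  const { instance } = await WebAssembly.instantiate(bytes, imports);
  // Exported memory is isolated; the host decides what to copy in or out.
  (instance.exports.main as () => void)();
}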

Shipping Tiny WebAssembly Builds – Alon Zakai

WebAssembly is usually smaller than Javascript because

  • dead code elimination
  • binary format

But there is a risk:

  • large runtime library requirements

Tip 1: Compression: GZip or Brotli

Tip 2: wasm-opt generates a smaller wasm from a wasm via:

  • dead code elimination
  • constant propagation
  • inlining
  • wasm-opt runs at link time, so it can optimize across the whole module (link-time optimization)
  • some toolchains run wasm-opt out of the box

Tip 3: size profiling

  • Bloaty
  • Twiggy
  • wasm-opt’s --func-metrics

There is a tradeoff between writing idiomatic code and getting a smaller binary.

C/C++ Tip (a lot of specific tips):

Rust Tip:

Go Tip:

  • TinyGo vs. the regular runtime size differences

Why the #wasmsummit Website isn’t written in Wasm, and what that means for the future of Wasm – Ashley Williams

WebAssembly shouldn’t replace Javascript

How are values prioritized for WebAssembly? The community does not have an explicitly shared vision. We should make it easier for people to use WebAssembly.

Empowering people!

What does WebAssembly want?

  • Marketing does not speak the language of the people it wants to reach

Rust, C++, Javascript, Academia, New Developers all get together

What do people want from WebAssembly?

  • multilanguage support
    • Why? JS doesn’t meet my need
      • Why? performance inconsistencies / don’t understand/like it

What do the above numbers mean?

  1. Javascript has an unwilling monopoly.
  2. Performance is not as large of a concern as you would expect.
  3. A lot of people haven’t tried WASM yet.

History of Programming Language Development

  • 1995 → High-level abstractions (Ruby, Javascript, Java)
  • 2010 → Low-level languages (Go, Rust, WASM)

The demand for speed of computation on Web is growing since Web is the most powerful distribution channel.

Excel and Flash were hugely empowering technologies. WebAssembly should strive to be the same.

JavaScriptCore’s new WebAssembly interpreter – Tadeu Zagallo

I didn’t take many notes here since this article (https://webkit.org/blog/9329/) covers the talk very well.

WebAssembly Music – Peter Salomonsen

A talk on generating MIDI music with WebAssembly: Javascript to write songs and WebAssembly to generate sound. He was also able to create an executable out of the Javascript that plays the song from the terminal.
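
I did not capture the talk’s actual code; as a rough illustration of the Javascript-plus-WebAssembly split, here is a hedged sketch that asks a hypothetical Wasm export for audio samples and plays them with the Web Audio API.

// Hedged sketch only: the "renderSample" export is a hypothetical name.
async function playWasmTone(bytes: BufferSource) {
  const { instance } = await WebAssembly.instantiate(bytes, {});
  const renderSample = instance.exports.renderSample as (t: number) => number;

  const ctx = new AudioContext();
  const seconds = 2;
  const buffer = ctx.createBuffer(1, ctx.sampleRate * seconds, ctx.sampleRate);
  const channel = buffer.getChannelData(0);
  for (let i = 0; i < channel.length; i++) {
    // Javascript drives the song; WebAssembly computes each sample.
    channel[i] = renderSample(i / ctx.sampleRate);
  }

  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);
  source.start();
}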

WebAssembly and Internet of Things – Jonathan Beri

What is IoT?

  • Embedded systems
    • Constrained
      • Limited processing power and battery
      • Limited connectivity

In 2017, a runtime for embedded systems did not exist.

A unikernel is a specialized OS for a single application.

Unikraft works to enable building unikernels with ease.

WebAssembly on Arduino is now possible thanks to WAMR and WASM3

Building a Containerless Future with WebAssembly – Kevin Hoffman

Low-level runtimes: https://github.com/appcypher/awesome-wasm-runtimes

Mid-level runtimes: waPC https://medium.com/@KevinHoffman/introducing-wapc-dc9d8b0c2223 & wasCAP https://docs.rs/wascap/0.3.0/wascap/

High-level runtimes: waSCC https://wascc.dev/

Since WebAssembly runs without a container, its code can be updated on the fly without rebooting (dynamically bound).

My question: why is waSCC needed when you already have the sandbox?

WebAssembly as a <video> polyfill – Brion Vibber

Wikipedia’s tech stack is limited by its philosophy: it can only use openly/publicly licensed technologies.

Over time, they optimized the polyfill in the following steps (a rough feature-detection sketch follows the list):

  1. Javascript to WebAssembly
  2. Threaded build
  3. SIMD
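
The polyfill’s actual code was not shown in my notes; as a hedged sketch of the general approach, the host page can feature-detect WebAssembly and fall back to a plain Javascript decoder. The module path and decoder factory names are hypothetical.

// Hypothetical decoder factories; in the real polyfill these would wrap the actual decoders.
declare function createWasmDecoder(instance: WebAssembly.Instance): unknown;
declare function createJsDecoder(): unknown;

// Hedged sketch: choose a decoder based on WebAssembly availability.
async function loadVideoDecoder() {
  if (typeof WebAssembly === "object" && typeof WebAssembly.instantiate === "function") {
    const response = await fetch("decoder.wasm"); // hypothetical path
    const { instance } = await WebAssembly.instantiate(await response.arrayBuffer(), {});
    return createWasmDecoder(instance);
  }
  return createJsDecoder(); // slower, pure-Javascript fallback
}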

Closing Keynote: WebAssembly: Expanding the PIE – Ben Smith

This talk felt like a recap of the community over the last 4-5 years. Incrementalism in the WebAssembly community enables moving forward at a steady pace.

Sep 2015

  • ml-proto (ocaml based) ⇒ WebAssembly reference interpreter
  • v8-native-prototype (binary format) ⇒ becomes WebAssembly binary format
  • a two-week bet to translate ml-proto to v8-native-prototype: sexpr-wasm ⇒ sexpr-wasm-prototype ⇒ wabt

A modest goal at the time: compile C++ to WebAssembly, with WebAssembly and JS interop.

APIE has been expanding since its inception:

  • Ability
  • Producer
  • Interop
  • Embedder

Sep 2017

Notable proposals: Garbage Collection, Host bindings

Jun 2018

Notable proposals: Garbage Collection, Reference Types (later became Interface Types), Wasm C API

Aug 2019

Notable proposals: Typed Function References, Type Imports, WASI, Interface Types

Feb 2020

Notable proposals: Reference Types (Phase 4), GC, Wasm C API, WASI, Interface Types

My Review of GraphQL Summit 2019

I went to GraphQL Summit 2019, hosted by Apollo in San Francisco. I was very excited since GraphQL is the technology I use and learn about every day. Overall, it was an excellent opportunity to see the increasing penetration and impact of GraphQL. Speakers frequently quoted this number from the npm survey: 23% of Javascript developers are using GraphQL. Naturally, many talks focused on scaling GraphQL at large companies such as Shopify, Paypal, and Expedia. The technology is not just for greenfield projects or startups anymore. But the mobile talk lineup was relatively weak, possibly indicating the immaturity of GraphQL on mobile.

Talks were primarily divided into two categories: client-side and server-side. I mostly went to the client-side ones. Of those, I enjoyed Fine-Tuning Apollo Client Caching for Your Data Graph, Client-side GraphQL at scale, and The GraphQL developer experience the most. The following are my notes on the talks I attended. I hope they guide you to something interesting.


Day 1

The GraphQL developer experience by Danielle Man (👍)

From the start, Danielle made a good point about the real benefit of GraphQL. It’s not just about minimizing payload or reducing round trips. It’s about the productivity boost from the integrated experience with typed API, normalization, and intelligent caching. React, Prettier, and VS Code solved the challenges of component structure, formatting, and type intelligence. Now GraphQL solves the next big problem, data fetching. I like that she went into the whys of GraphQL and also gave an end-to-end view of the tooling. I recommend it to those whose GraphQL journey is just starting.

State Management in GraphQL using React Hooks & Apollo by Shruti Kapoor

I was a little disappointed with Shruti’s talk since I didn’t find it that relevant to GraphQL. As she focused mostly on React hooks, this is your talk if you aren’t familiar with hooks.

Fine-Tuning Apollo Client Caching for Your Data Graph by Ben Newman (👍)

Ben talked about the new features in the upcoming Apollo Client 3. I found the material very relevant because my team is already seeing a huge performance bottleneck from Apollo Cache. There were several exciting features: Garbage collection, declarative cache config (though it doesn’t statically check the config yet), and improved pagination handling. Since most of the features are about performance, the talk is meaningful for those using Apollo Client at scale already.

Scaling GraphQL Beyond a Backend for Frontend by Michelle Garrett

As a frontend developer, it can be frustrating trying to adopt GraphQL since you find yourself dependent on your backend counterparts. Michelle talked about how you can get around that inertia by using a GraphQL middleware (or BFF). Though I believe a client-side resolver is the lighter-weight approach, it was inspiring to see her org eventually turn around thanks to the superior developer experience. She then talked about her plan to adopt federation. This talk is appropriate for those interested in figuring out a GraphQL adoption strategy.

Apollo Lounge (not a talk)

I spent an hour between talks talking to Hugh Willson, one of the Apollo engineers behind Apollo Client 3, about the performance bottleneck I saw in the beta release. The problem was that Apollo Client took a long time to respond to a large query response (a tree of about ~2000 objects) even with denormalization turned off. Due to the time constraint, we didn’t get to the bottom of the issue. But it was nice to see how an Apollo engineer goes about debugging the client, and to be reassured that my configuration was not the problem.

Game Of Types: A Song Of GraphQL And TypeScript by Steven Musumeche

After seeing Danielle’s talk, Steven’s talk didn’t feel new to me, especially because I am following the development process he outlined almost precisely. But if you ever wonder how all these generated types (whether they are from Apollo Tooling or GraphQL Code Generator) fit into your type system, this talk is for you.

(Video is not yet available.)


Day 2

useSubscription: A GraphQL Game Show by Alex Banks

The most entertaining talk I have ever been to. Alex made GraphQL subscription via WebSocket unforgettable. However, as I went to the talk expecting to see GraphQL streaming (a misunderstanding on my part), I ended up getting a little disappointed. If you are building a real-time app, watch this talk when it becomes available.

How do you get changes made to GraphQL? by Orta

Even though GraphQL’s governance mostly feels irrelevant, it matters to all of us. Orta talked through how the current GraphQL Foundation came about and how he saw the changes through. This talk isn’t for everyone, but if you would like to contribute to the spec one day, watch it.

The future of GraphQL tooling and DX by Daniel Woelfel

The whole talk felt like a sales pitch for his company, OneGraph. But Daniel did showcase many inspiring tools built on GraphQL: a point-and-click GUI to build a query, an Excel plugin to import GraphQL data into a spreadsheet, and a type checker for queries embedded in markdown documents. The talk was more inspirational than useful.

Building a faster checkout experience at PayPal with GraphQL by Vishakha Singh

Vishakha focused on how minimized payloads and some intelligent caching with GraphQL improve PayPal’s checkout performance. But honestly, I didn’t have many takeaways.

Client-side GraphQL at scale by Chris Sauvé (👍)

Shopify’s admin app has ~1000 GraphQL queries and ~700 entities. The company came up with a couple of useful libraries to mitigate this complexity. One library filled the gap in Apollo Client’s type system using collocated d.ts files for GraphQL documents, which I found smart. Another autogenerated mock data based on the GraphQL schema. I plan to adopt both of them in my current projects. If you are pressed for time, you don’t need to watch the talk since the documentation for the libraries does an excellent job of explaining what they do.

Edit: All videos can now be found here. I linked the videos to my review as well.

Changing Tide

About 2-3 years ago, most of the rising GitHub repos were in Javascript or related to frontend development. It was an exciting time to follow the nightly updates. So many projects to try and learn!

But no more. The most starred repos are now Go projects or wiki-like collections. I am not discounting these repos; I am just disappointed by the lack of movement.

Promise.any and Promise.allSettled

One benefit of the Javascript proposal process is that there is always something new to learn to keep things interesting. Promise.any and Promise.allSettled are not revolutionary, but they enable a new, more concise way to code. You can read more about them here.

A caveat I found is that Promise.allSettled will never reject. It does make sense but at first I found myself thinking, “so when does it reject and what does it reject with?” I am interested to see how this behavior will be typed in Typescript.
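
A quick, hedged sketch of the behavior described above (Typescript, assuming a runtime and lib target that already ship both methods; the mirror URLs are hypothetical):

async function demo() {
  // Promise.any resolves with the first fulfilled value and rejects only
  // when every input rejects (with an AggregateError).
  const first = await Promise.any([
    fetch("https://mirror-a.example.com"),
    fetch("https://mirror-b.example.com"),
  ]);
  console.log(first.status);

  // Promise.allSettled never rejects; each input becomes an outcome object.
  const results = await Promise.allSettled([
    Promise.resolve(42),
    Promise.reject(new Error("boom")),
  ]);
  for (const result of results) {
    if (result.status === "fulfilled") console.log("value:", result.value);
    else console.log("reason:", result.reason);
  }
}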

“When there is only one single-line text input field in a form, the user agent should accept Enter in that field as a request to submit the form.” (HTML Spec)

I was looking into a bug where an embedded form would die due to a security restriction when you press Enter inside the input. It turns out this obscure behavior was causing the issue 🤷

(I concede that we should handle form submit properly though)
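
For reference, a minimal sketch of handling the submit event explicitly so the implicit Enter submission doesn’t take you by surprise (the form id is mine, not from the original bug):

// Minimal sketch: intercept the implicit submission triggered by Enter.
const form = document.querySelector<HTMLFormElement>("#search-form"); // hypothetical id
form?.addEventListener("submit", (event) => {
  event.preventDefault(); // stop the default (possibly cross-origin) submission
  // ...handle the value ourselves instead
});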

Yet another JSON validator

https://github.com/joanllenas/ts.data.json

My first reaction was “well, you should use GraphQL.” Even if you don’t have control over your APIs, having two sources of truth feels very cumbersome just to validate data. I believe a better approach would be to generate a JSON validator from the type information at compile time, in the spirit of babel-blade or react-docgen-typescript-loader. But then again, I don’t have a clear plan to achieve that, either.

Part 2: Typescript+Redux Best Practice at Vingle

Part 1: History of Redux State Management at Vingle

In part two, I am going to describe our team’s current best practices for making Typescript work for you when working with Redux.

  • Creating Type-safe Actions and Reducers
  • Properly typing Redux Container

Creating Type-safe Actions and Reducers

Considering that reducers are just simple functions that accept two arguments, you would expect Typescript to work well with both of them. States do. But actions, because dispatch accepts any type of argument, cannot be typed safely without the developer’s involvement. If you don’t type your actions, your reducer will end up in a not-so-ideal state:

function reducer(state = INITIAL_STATE, action: Redux.Action) {
  switch (action.type) {
    case ActionTypes.FETCH_USER: {
      // simple case: nothing tells Typescript what the payload looks like
      return {
        ...state,
        userId: (action as any).payload.userId,
      };
    }
    default: {
      return state;
    }
  }
}

You can catch some of the type errors with unit tests, but you will miss some properties and lose the easy refactoring Typescript provides. To achieve type safety before Typescript 2.8, you could use string enums:

enum ActionTypes {
  FETCH_USER = "FETCH_USER",
}

interface IFetchUserAction {
  type: ActionTypes.FETCH_USER;
  payload: { userId: string };
}

interface IOtherAction {
  type: "____________________";
}

type Actions = IFetchUserAction | IOtherAction;

function fetchUser(userId: string): IFetchUserAction {
  return {
    type: ActionTypes.FETCH_USER,
    payload: {
      userId,
    },
  };
}

function reducer(
  state = INITIAL_STATE,
  action: Actions,
): IState {
  switch (action.type) {
    case ActionTypes.FETCH_USER: {
      // in this closure, Typescript knows that action is of interface IFetchUserAction, thanks to enum ActionTypes.
      return {
        ...state,
        userId: action.payload.userId,
      };
    }
    default: {
      return state;
    }
  }
}

IOtherAction is needed so that Typescript won’t complain about the default case in the switch statement (that is, exhaustiveness checking). This works OK if you ignore the fact that the action interfaces and the action creators are essentially duplicate type definitions. Starting with Typescript 2.8, you can use ReturnType to remove the action interfaces. The code below is our way of typing actions and reducers.

import { ActionCreatorsMapObject } from "redux";
// interface ActionCreatorsMapObject {
//   [key: string]: ActionCreator<any>;
// }

type ActionUnion<T extends ActionCreatorsMapObject> = ReturnType<
  T[keyof T]
>;

enum ActionTypes {
  FETCH_USER = "FETCH_USER",
}

function createAction<T extends { type: ActionTypes }>(d: T): T {
  return d;
}

export const ActionCreators = {
  fetchUser: (payload: { userId: string }) =>
    createAction({ type: ActionTypes.FETCH_USER, payload }),
};

type Actions = ActionUnion<typeof ActionCreators>;

function reducer(
  state = INITIAL_STATE,
  action: Actions,
): IState {
  switch (action.type) {
    case ActionTypes.FETCH_USER: {
      // in this closure, Typescript knows that action is of ActionCreators.fetchUser's ReturnType.
      return {
        ...state,
        userId: action.payload.userId,
      };
    }
    default: {
      return state;
    }
  }
}
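
As an aside on the exhaustiveness checking mentioned earlier, here is a hedged sketch (a standard Typescript idiom, reusing the names from the example above, not code from our app) showing how the default case can enforce that every member of Actions is handled:

function reducerWithCheck(
  state = INITIAL_STATE,
  action: Actions,
): IState {
  switch (action.type) {
    case ActionTypes.FETCH_USER:
      return { ...state, userId: action.payload.userId };
    default: {
      // Compile-time exhaustiveness check: if Actions gains a member that is not
      // handled above, `action` no longer narrows to `never` and this assignment
      // fails to compile.
      const unhandled: never = action;
      void unhandled;
      // Runtime fallback for actions we don't own (e.g. redux init actions).
      return state;
    }
  }
}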

Typing Redux Container components

Typing Redux container components correctly is important for using and testing the components correctly. Before our team learned how to type them, we ended up with tests like this:

const Container = (props: { data: any; dispatch: Dispatch<any> }) => {
  // render something and do something useful
  return <div />;
};

const ConnectedContainer = connect()(Container);

describe("", () => {
  let wrapper: ReactWrapper;

  beforeEach(() => {
    const store = mockStore(state);
    wrapper = mount(<ConnectedContainer data dispatch={store.dispatch} />, { store });
  });
});

So let’s dive in.

Before you try to type Redux container components properly, you need to understand the type definition of connect. Carefully read the code below, which I quoted from the react-redux type definitions (comments are mine). The definition uses a lot of overloads, but I will go through some cases to help you understand exactly what goes on.

Please note that the definitions below are from @types/react-redux@5.0.19.

When you don’t pass in any argument to connect

This is when you only need dispatch inside your container.

const Container = (props: { data: any; dispatch: Dispatch<any> }) => {
  // render something and do something useful
  return <div />;
};

export default connect()(Container);

As there are no arguments to connect, all connect does is inject Dispatch<any> into the props.

When you pass in mapStateToProps to connect

If you want to map only state to props, say for render-only components, you pass just mapStateToProps:

type SearchData = { query: string };

type AppState = {
  searchData: SearchData;
};

type Props = { query: string; data: any; dispatch: Dispatch<any> };

function mapStateToProps(state: AppState) {
  return {
    query: state.searchData.query,
  };
}

const Container = (_props: Props) => {
  // render something and do something useful
  return <div />;
};

const A = connect(mapStateToProps)(Container);

<A data />; // this is valid
<A data dispatch={store.dispatch} />; // this isn't valid

It almost looks like magic, as the react-redux type definitions do a lot of heavy lifting for us. Let’s examine what actually happens in the code above.

interface Connect {
  <TStateProps = {}, no_dispatch = {}, TOwnProps = {}, State = {}>(
    mapStateToProps: MapStateToPropsParam<TStateProps, TOwnProps, State>,
  ): InferableComponentEnhancerWithProps<
    TStateProps & DispatchProp<any> & TOwnProps,
    TOwnProps
  >;
}

This connect definition is the overload that gets used. In the definition, mapStateToProps expands to

(initialState: State, ownProps: TOwnProps) => (
  state: State,
  ownProps: TOwnProps,
) => TStateProps;

So Typescript will infer TStateProps to be { query: string } and State to be AppState from the mapStateToProps argument. InferableComponentEnhancerWithProps expands to

<P extends (TStateProps & DispatchProp<any> & TOwnProps)>(
  component: Component<P>,
): ComponentClass<
  Omit<P, keyof (TStateProps & DispatchProp<any> & TOwnProps)> & TOwnProps
> & { WrappedComponent: Component<P> }

And Typescript will infer P to be Props and check whether the container component’s props are at least as large as the intersection of TStateProps, DispatchProp<any>, and TOwnProps.

If I put the logic above into code, it looks like the following:

type TStateProps = ReturnType<typeof mapStateToProps>;
// this results in { data: any }. But it isn't necessary and you can use {} without a problem.
type TOwnProps = Omit<Props, keyof TStateProps | keyof DispatchProp<any>>;

const B = connect<TStateProps, {}, TOwnProps, AppState>(mapStateToProps)(
  Container,
);

<B data />; // this is valid
<B data dispatch={store.dispatch} />; // this isn't valid

When you pass in both mapStateToProps and mapDispatchToProps to connect

This isn’t hard to understand once you understand how the type definitions handle mapStateToProps; mapDispatchToProps is treated the same way. For your reference, I included the relevant overload below.

interface Connect {
  <TStateProps = {}, TDispatchProps = {}, TOwnProps = {}, State = {}>(
    mapStateToProps: MapStateToPropsParam<TStateProps, TOwnProps, State>,
    mapDispatchToProps: MapDispatchToPropsParam<TDispatchProps, TOwnProps>,
  ): InferableComponentEnhancerWithProps<
    TStateProps & TDispatchProps & TOwnProps,
    TOwnProps
  >;
}

When you also pass in mergeProps

This is also rather straightforward. Instead of merging TStateProps, TDispatchProps, and TOwnProps naively for the component definition, connect will now depend on mergeProps to merge these props. The only additional check (or inference) is whether mergeProps is of type (stateProps: TStateProps, dispatchProps: TDispatchProps, ownProps: TOwnProps) => TMergedProps.
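
For illustration, a minimal hedged sketch of a mergeProps (the component and prop names are hypothetical, loosely reusing the AppState shape from the earlier examples):

import * as React from "react";
import { Dispatch } from "redux";
import { connect } from "react-redux";

type MergedProps = { query: string; onSearch: (query: string) => void };

const SearchBox = (props: MergedProps) => <div>{props.query}</div>;

function mapStateToProps(state: { searchData: { query: string } }) {
  return { query: state.searchData.query };
}

// mergeProps decides the final props the wrapped component sees.
function mergeProps(
  stateProps: { query: string },
  dispatchProps: { dispatch: Dispatch<any> },
  _ownProps: {},
): MergedProps {
  return {
    ...stateProps,
    // Expose a bound callback instead of raw dispatch, for example.
    onSearch: (query) =>
      dispatchProps.dispatch({ type: "SEARCH", payload: { query } }),
  };
}

const ConnectedSearchBox = connect(mapStateToProps, null, mergeProps)(SearchBox);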

What this means

First of all, congratulations on getting through all these different types! Now you know how connect works. It turns out you usually don’t need to supply the type parameters yourself when you use react-redux’s connect. However, other HOCs’ definitions will vary, and you will need to learn how their types work.

Extra credit (Typescript tips not related to Redux)

Know your types in React

Knowing the React types helps your code work with React seamlessly. Here is our usual go-to list.

React.Component<P, S>
React.StatelessComponent<P>
React.ReactElement = instantiated React Component
React.ReactNode = React.ReactElement + Renderable primitive types (object is not valid). `children` has this type
React.CSSProperties
React.ReactEventHandler
React.<Input>Event
React.HTMLProps<ElementType> = Used to extend your component props. Ex) TOwnProps & React.HTMLProps<HTMLDivElement>

How to type HOCs that inject props

The following code is an excerpt from react-intl. This type definition is straightforward to set up, but it expects the users of the library to know which props are injected.

interface InjectedIntlProps {
  intl: InjectedIntl;
}

function injectIntl<P>(
  component: ComponentConstructor<P & InjectedIntlProps>,
  options?: InjectIntlConfig,
): React.ComponentClass<P> & {
  WrappedComponent: ComponentConstructor<P & InjectedIntlProps>;
};

// actual usage
interface IProps {
  flag: boolean;
}

class Toast extends React.PureComponent<IProps & InjectedIntlProps> {}

export default injectIntl<IProps>(Toast);

Use Ambient Types to simplify your dependencies

This is an easy-to-miss option when you first start using Typescript. You should use the typeRoots compiler option to pick up your own ambient type declarations instead of adding unnecessary dependencies.
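
A minimal, hedged sketch of the idea (the module name and folder layout are hypothetical): declare an ambient module in a local folder and point typeRoots at it in tsconfig.json, so the code compiles without installing a separate @types package.

// typings/untyped-lib/index.d.ts (hypothetical ambient declaration for a
// dependency that ships no types)
declare module "untyped-lib" {
  export function doSomething(input: string): number;
}

// In tsconfig.json (shown here as a comment), point typeRoots at the folder so
// the compiler picks it up alongside node_modules/@types:
//   "compilerOptions": {
//     "typeRoots": ["./typings", "./node_modules/@types"]
//   }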

Afterword

As we develop and maintain our React apps, we have encountered many bugs. In our experience, the harder-to-track and more critical bugs often stemmed from the typeless parts of the code. That is why we are determined to type things both comprehensively and correctly. This isn’t the farthest we can go with Typescript, but it is where we are at, and I hope this article has helped you understand Typescript and Redux more deeply.

Part 1: History of Redux State Management at Vingle

Part 2: Typescript+Redux Best Practice at Vingle

This post is a repost of my post at Vingle Tech Blog.

In this two-part post, I am going to go over the different flavors of Redux state management at Vingle and our thought process behind each iteration we went through over the last year and a half. I hope this post guides you in structuring your Redux states.

Genesis: Redux + Immutable.Map

My team chose React to create a small-scale mobile marketing website as a learning experiment. Our main project at the time was based on Rails and Angular 1, and we were separating web applications from Rails to simplify and speed up our deployment process. That meant we had to create everything from scratch, a new build pipeline and a new webpack configuration, while learning about the vast React ecosystem.

We heard that Redux greatly simplifies debugging application state and, with nightmarish memories of debugging Angular 1’s watchers, chose to adopt Redux. We also learned a bit about shouldComponentUpdate and React’s component lifecycle, and wanted an immutable state. I was already familiar with higher-order immutable objects from my previous work (this), so Immutable.js was an obvious choice. (A small sketch of why immutability helps shouldComponentUpdate follows below.)
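
That reasoning, sketched briefly (this is an illustration of the general idea, not code from our app): with immutable state, shouldComponentUpdate can be a cheap reference comparison.

import * as React from "react";

interface IProps {
  post: { title: string }; // with immutable updates, a changed post is a new object
}

class PostTitle extends React.Component<IProps> {
  shouldComponentUpdate(nextProps: IProps) {
    // Reference equality is enough: an unchanged post keeps the same reference,
    // so we can skip re-rendering without a deep comparison.
    return nextProps.post !== this.props.post;
  }

  render() {
    return <h1>{this.props.post.title}</h1>;
  }
}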

In the end, we had a Redux setup looking like this:

import { Map } from "immutable";

// reducers
const INITIAL_STATE = Map({ post: null, isLoading: false });

function postReducer(state = INITIAL_STATE, action) {
  switch (action.type) {
    case "FETCHED": {
      return state.withMutations(currentState =>
        currentState.set("post", action.payload.post).set("isLoading", false),
      );
    }
    default: {
      return state;
    }
  }
}

// action creators
function fetchedPost(post) {
  return {
    type: "FETCHED",
    payload: {
      post,
    },
  };
}

1st Iteration: Typescript + Redux + Immutable.Map

Once we had gotten more used to React and Redux, and had proven that we could develop new features much faster on the mobile page, we started migrating our main web application to React. Unlike the proof-of-concept mobile page, this app would have dozens of routes and reducers and much more complex components, so we chose to use Typescript for it.

Unfortunately, Immutable.Map with different types of values (number, boolean, other Maps, or Lists, for example) does not play well with Typescript. The following is a Typescript definition of Immutable.Map:

interface Keyed<K, V> extends Collection<K, V>, Iterable.Keyed<K, V> {}

interface Map<K, V> extends Keyed<K, V> {
  set(key: K, value: V): Map<K, V>;
  setIn(keyPath: Array<any>, value: any): Map<K, V>;
}

As you can see, there isn’t a good way to specify different types for an Immutable.Map’s values. So we ended up with this hacky workaround:

// scaffolding
interface IPostStateImmutable {
  get(key: "post"): IPostImmutable | null; // IPostImmutable is also another hacky interface like IPostStateImmutable.
  get(key: "isLoading"): boolean;
  set(key: "post", value: IPostImmutable | null): IPostStateImmutable;
  set(key: "isLoading", value: boolean): IPostStateImmutable;
  withMutations(
    mutator: (mutable: IPostStateImmutable) => IPostStateImmutable,
  ): IPostStateImmutable;
}

// reducers
const INITIAL_STATE: IPostStateImmutable = Map({
  post: null,
  isLoading: false,
});

function postReducer(
  state: IPostStateImmutable = INITIAL_STATE,
  action,
): IPostStateImmutable {
  switch (action.type) {
    case "FETCHED": {
      return state.withMutations(currentState =>
        currentState.set("post", action.payload.post).set("isLoading", false),
      );
    }
    case "UPDATED_TITLE": {
      return state.setIn(["post", "title"], action.payload.title);
    }
    default: {
      return state;
    }
  }
}

// actions stay the same

Needless to say, this pattern is painful to maintain and its correctness is hard to guarantee. Typescript got in the way rather than helping us.

2nd Iteration: Typescript + Redux + Immutable.Record

So we looked for a better way to tie Typescript and Immutable.js together. Then we found that there was another Immutable class called Immutable.Record and a library called typed-immutable-record. With the library, we created a type-safe Immutable Record:

import { TypedRecord, recordify } from "typed-immutable-record";

// scaffolding
interface IPostState {
  post: IPost | null;
  isLoading: boolean;
}

interface IPostStateRecordPart {
  post: IPostRecord; // this interface is created in a similar fashion.
  isLoading: boolean;
}

interface IPostStateRecord
  extends TypedRecord<IPostStateRecord>,
    IPostStateRecordPart {}

function recordifyPostState(plainState: IPostState): IPostStateRecord {
  return recordify<IPostStateRecordPart, IPostStateRecord>({
    post: plainState.post
      ? recordify<IPostRecordPart, IPostRecord>(plainState.post)
      : null,
    isLoading: plainState.isLoading,
  });
}

// reducers
const INITIAL_STATE: IPostStateRecord = recordifyPostState({
  post: null,
  isLoading: false,
});

function postReducer(
  state: IPostStateRecord = INITIAL_STATE,
  action,
): IPostStateRecord {
  switch (action.type) {
    case "FETCHED": {
      return state.withMutations(currentState =>
        currentState.set("post", action.payload.post).set("isLoading", false),
      );
    }
    case "UPDATED_TITLE": {
      return state.setIn(["post", "title"], action.payload.title);
    }
    default: {
      return state;
    }
  }
}

It took some time for us to understand how to scaffold the Record interfaces correctly, but we managed to create type-safe Redux states with both dot notation and helper methods like setIn or withMutations. However, as you can see from the code above, we had to create a large number of interfaces, especially when our states were deeply nested. Once we got the pattern down it wasn’t difficult to follow, but it was a lot of work, which disincentivized our team from creating smaller, isolated reducers. But we didn’t know any better, so we carried on.

3rd Iteration: Typescript + Redux + Typescript Readonly Interfaces

During a random conversation with an engineer at another startup, I learned about readonly properties in Typescript, and realized those properties could replace Immutable.js completely.

// scaffolding
interface IPostState
  extends Readonly<{
    post: IPost | null;
    isLoading: boolean;
  }> {} // IPost has to be a Readonly interface as well.

// reducers
const INITIAL_STATE: IPostState = {
  post: null,
  isLoading: false,
};

function postReducer(state: IPostState = INITIAL_STATE, action): IPostState {
  switch (action.type) {
    case "FETCHED": {
      return {
        post: action.payload.post,
        isLoading: false,
      };
    }
    case "UPDATED_TITLE": {
      return {
        ...state,
        post: {
          ...state.post,
          title: action.payload.title,
        },
      };
    }
    default: {
      return state;
    }
  }
}

By using Readonly interfaces, the scaffolding is reduced to a quarter of its size by removing the RecordPart and Record interfaces and recordify. However, there is a problem with this approach when you need to update deeply nested values; the UPDATED_TITLE case above is such an example. During the conversion, some code got out of hand, like this:

return {
  ...state,
  post: {
    ...state.post,
    author: {
      ...state.post.author,
      relation: {
        ...state.post.author.relation,
        following: true,
      },
    },
  },
};

4th Iteration: Typescript + Redux + Typescript Readonly Interfaces + Normalizr

We could have solved this problem by adopting a deep-merge library, but we feared those libraries might not be type-safe. After giving it some thought, we determined that the real problem was the deeply nested structure of our states and planned to flatten the states by normalizing them. Of the two popular normalizing libraries, redux-orm and normalizr, we chose the latter for its simplicity.
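
For context, a brief hedged sketch of how normalizr flattens a nested response into entities (the postEntity name mirrors the one used in the final code below; the response shape is hypothetical):

import { schema, normalize } from "normalizr";

// Each post becomes a flat entity keyed by id; nested authors become their own entities.
const authorEntity = new schema.Entity("authors");
const postEntity = new schema.Entity("posts", { author: authorEntity });

const response = {
  id: 1,
  title: "Hello",
  author: { id: 10, name: "tom" },
};

const { entities, result } = normalize(response, postEntity);
// entities.posts[1]    -> { id: 1, title: "Hello", author: 10 }
// entities.authors[10] -> { id: 10, name: "tom" }
// result               -> 1 (the post id to store in the reducer)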

Our final and current version of the Redux setup looks like the following:

// post reducer
interface IPostState
  extends Readonly<{
    postId: number | null;
    isLoading: boolean;
  }> {}

const INITIAL_STATE: IPostState = {
  postId: null,
  isLoading: false,
};

function postReducer(state: IPostState = INITIAL_STATE, action): IPostState {
  switch (action.type) {
    case "FETCHED": {
      return {
        postId: action.payload.postId,
        isLoading: false,
      };
    }
    default: {
      return state;
    }
  }
}

// normalized entity reducer
interface IEntityState
  extends Readonly<{
    posts: {
      [postId: number]: INormalizedPost;
    };
  }> {}

function entityReducer(
  state: IEntityState = { posts: {} },
  action,
): IEntityState {
  switch (action.type) {
    case "ADD_ENTITIES": {
      return {
        ...state,
        posts: {
          ...state.posts,
          ...action.payload.entities.posts,
        },
      };
    }
    case "UPDATED_TITLE": {
      const postToUpdate = state.posts[action.payload.postId];
      if (!postToUpdate) {
        return state;
      }
      return {
        ...state,
        posts: {
          ...state.posts,
          [action.payload.postId]: {
            ...postToUpdate,
            title: action.payload.title,
          },
        },
      };
    }
    default: {
      return state;
    }
  }
}

// action creators
function fetchedPost(postId: number) {
  return {
    type: "FETCHED",
    payload: {
      postId,
    },
  };
}

function addEntities(entities: Partial<IEntityState>) {
  return {
    type: "ADD_ENTITIES",
    payload: {
      entities,
    },
  };
}

// container component
function mapStateToProps(state: IAppState, _routeProps: any) {
  return {
    post: denormalize(state.postState.postId, postEntity, state.entities),
  };
}

Afterword

When I look back, part of me regrets that we didn’t do more research, which could have saved a lot of time; this collection of Redux-related libraries would have been helpful, and normalizing is already covered in the official Redux documentation. However, part of me also feels like we would never have appreciated the utility of these libraries and techniques if we hadn’t experienced the downsides of not using them. And that is why I wrote this post; I hope it helps you see what problems lie ahead and saves you some time.