Learning RxQ 1.0

Great news! RxQ 1.0 has officially been released. It comes with many improvements and lots of good documentation and examples to help people get going on making reactive Qlik apps.

However, documentation will only get you so far. These examples alone won’t teach you how to approach Qlik reactively, nor what to do when your code just won’t do what you want it to. That’s why, in honor of this new release, I am launching two new live streaming “courses” that will walk through RxQ usage in detail. The courses are:

  1. RxQ Recipes Explained – in this series of videos, I will walk through the new RxQ docs and then re-code the recipes one by one, explaining the how and why behind them along the way
  2. Building a Reactive Listbox – this series will start from a blank project and build out a fully functional listbox on par with the out of box Qlik Sense object, explaining Qlik Engine API and RxJS concepts along the way

These courses will be covered in sequence and will be live streamed on https://www.twitch.tv/kinisi. Follow along there for updates on live stream times. Videos will be archived on my YouTube channel.

Looking forward to learning with you!


Little Bits: JavaScript’s Array reduce()

Here’s a quick intro to a bit of code that I love: JavaScript’s Array reduce(). reduce is a method that takes an array of items and transforms it into a single value. You provide a function that dictates how that final value is calculated as reduce walks through each item in the array. For example, you can write a simple sum with a reducer by keeping a running total of the values in the array like so:

[1, 2, 3, 4, 5].reduce((acc, curr) => acc + curr);
// -> 15
An animation of the reducer in action

The reduce function can be used for all kinds of operations, including chaining together sequential async calls based on an array. To learn more, check out the video below:
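One of those uses, chaining sequential async calls, can be sketched in a few lines. This is an illustrative example of the general pattern, not code from the video: each step is a function returning a Promise (Promise.resolve stands in for a real async call), and reduce folds them into one chain so each step waits for the previous one.

```javascript
// Each "step" is a function that takes the running total and returns a
// Promise — a stand-in for a real async call like a fetch.
const steps = [1, 2, 3].map(n => total => Promise.resolve(total + n));

// reduce chains the steps so each one waits for the previous to finish,
// starting from a resolved Promise carrying the initial value 0
const result = steps.reduce(
  (chain, step) => chain.then(step),
  Promise.resolve(0)
);

result.then(total => console.log(total)); // logs 6
```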

Building a Qlik Sense Extension – A Video Breakdown

Over the last month and a half, I have launched a couple of video channels where I explain different web development concepts while live coding. My goal is for people to see not only how to do something, but also how challenges arise, methods for debugging them, and so on.

To follow along in real time, follow the broadcasts on the Twitch channel. The videos are all archived on my YouTube channel.

The first set of videos I’ve posted is a 3-parter on building a Qlik Sense Extension from scratch. I hope they will help people new to the extension game get a sense of what it takes to build one from end to end.


Visual Trumpery in Atlanta

Alberto Cairo presenting in Atlanta. Photo by Caitlin Kokenes

Last month, the Atlanta Data Visualization meetup hosted Alberto Cairo at Georgia Tech as part of his Visual Trumpery lecture tour. On this tour, Alberto is speaking about the proper and ethical use of visualization and how to detect and deter fake visualization. We were very excited in Atlanta to be the first US city to host this talk.

“Visual Trumpery” is an insightful lecture that I recommend to anyone with an interest in the communication of data. Alberto presents his arguments passionately, with numerous examples that engage the audience and make his points tangible. He also recommends resources along the way for those interested in going deeper; these seemed to be a big hit as many of the questions in the Q&A turned back to these resources and how people could learn more.

The lecture centers around the concept of “graphicacy”, a distinction Alberto places as coming logically after literacy, articulacy, and numeracy. The idea of being “graphicate” involves the ability to interpret a visualization and determine what it is and is not telling you, without being fooled by your own personal biases or by tricks put in by the author. Alberto introduced two points around “graphicacy” that especially resonated with me:

  1. We must do a better job of evaluating the legitimacy of sources before we share them. I am guilty of breaking this rule; with social media, it is so easy to share a chart that comes across my feed without assessing its credibility. The better the chart aligns with my own biases, the faster I seem to reach for the retweet button.
  2. We must be more cognizant of the uncertainty in data and ensure that it is communicated properly. The 2016 presidential election comes to mind here. There were many issues with how uncertainty was handled by the media and conveyed to the public, causing frustration all around. In some places the media was overconfident in its predictions; in others, people did not understand the uncertainty of the predictions they were receiving, causing backlash against models that weren’t so bad (see FiveThirtyEight’s forecast).

The Visual Trumpery tour is continuing in the US in October – if you have a chance to attend, take it. Tour dates and other information can be found at https://visualtrumperytour.wordpress.com/.

If you are in the Atlanta area, please consider joining the Atlanta Data Visualization meetup. We meet a few times a year to learn about visualization work happening around the community. We are always looking for speakers, so if you have a topic or project you want to present about, please don’t hesitate to contact me.

– Speros


Creating a Progress Bar with RxJS and d3-timer

Previously, I’ve written about reactive programming and spoken at a few events about its usefulness in building highly interactive applications. It was especially handy in Conflicts in the Democratic Republic of Congo, the latest data visualization app that I built for Qlik’s 2017 Hackathon with the UN. The Timeline page uses an animated timeline to visualize changes over time but gives users the ability to control it like a video player using a progress bar.

The progress bar in action

This progress bar is a great example of how you might use RxJS to handle an interactive component with lots of moving parts. I will show you how to create a simplified version of it, explaining the process along the way.

To keep things simple, I will walk you through building a basic, utilitarian progress bar with a range input element. You can take these concepts and easily create a fancier slider using other means, such as SVG. Here is what our final progress bar will look like:


We will leverage d3-timer and RxJS to create the progress bar. You can find the working example with code here.

Our Requirements

First, let’s review what our progress bar should do:

  1. It should display the current time of the progress in milliseconds
  2. It should max out at 10 seconds
  3. It should have a “Play” button that starts the progress bar moving forward from whatever position it is in
  4. It should have a “Pause” button that pauses the timer, and a “Stop” button that resets the timer back to 0
  5. It should have 2 speed options: 1x and 2x actual time
  6. A user should be able to click or drag the slider to any point to change the current position of the timer

The Code

Let’s go through the code section by section, with explanations for each piece.

Creating a timer observable

Let’s set a max duration variable for our timer of 10,000 milliseconds, which we will use later. Then, we’ll create a timer Observable by wrapping d3.timer:

// The time duration of our timer
var maxDuration = 10000;

// d3.timer wrapped in an observable
const timer$ = Rx.Observable.create((observer) => {
    // On subscribe, create a new timer
    const t = d3.timer(elapsed => {
        // Pass the elapsed time from the timer
        observer.next(elapsed);
    });

    // Stop the timer when unsubscribed
    return () => t.stop();
});

timer$ is an Observable that creates a new d3 timer instance for each subscription. When an unsubscribe happens, the timer for that observer is stopped. d3 timer does not have any pause functionality; it only does two things: it emits the cumulative time since it was created, in milliseconds, as often as it can, and it can stop itself. That’s it. Therefore, to build a timer that can be paused, set to different positions, sped up or slowed down, etc., we will have to manipulate this Observable further down the line.
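To see why the teardown matters, here is a dependency-free sketch of the same subscribe/teardown pattern. setInterval stands in for d3.timer, and createTimerObservable is a name invented for this illustration, not part of RxJS or d3:

```javascript
// A minimal Observable-like wrapper around setInterval: subscribing
// starts a timer, and the returned teardown function stops it.
function createTimerObservable(periodMs) {
  return {
    subscribe(next) {
      const start = Date.now();
      const id = setInterval(() => next(Date.now() - start), periodMs);
      // Teardown: stop the timer on unsubscribe
      return () => clearInterval(id);
    }
  };
}

// Usage: log elapsed ms for roughly 100ms, then tear down
const unsubscribe = createTimerObservable(20).subscribe(elapsed =>
  console.log(elapsed)
);
setTimeout(unsubscribe, 100);
```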

Event Streams for the Buttons

Next, we can create event streams for the various buttons. For play, pause, and stop, we just want to know that the events happened; we don’t need any specific metadata from them. For the speed buttons, however, we want to capture which speed they correspond to. That way, we can merge them into a single Observable called speed$ that gives us the current speed value at any point in time. We will finish the speed$ Observable off with the startWith(1) operator to set the starting value to 1.

// Create click events for the play, pause, stop, and speed buttons
const play$ = Rx.Observable.fromEvent(document.querySelector("#play"), "click");
const pause$ = Rx.Observable.fromEvent(document.querySelector("#pause"), "click");
const stop$ = Rx.Observable.fromEvent(document.querySelector("#stop"), "click");

// Map the speed 1x button to the value 1 on click
const speed1x$ = Rx.Observable.fromEvent(document.querySelector("#speed1x"), "click")
    .mapTo(1);

// Map the speed 2x button to the value 2 on click
const speed2x$ = Rx.Observable.fromEvent(document.querySelector("#speed2x"), "click")
    .mapTo(2);

// Merge the speeds together into 1 stream, and start with 1 by default
const speed$ = speed1x$
    .merge(speed2x$)
    .startWith(1);


Add Scrubbing Functionality

Changing positions in a timer or progress bar is sometimes referred to as scrubbing. We have two conditions under which scrubbing should happen:

  1. The user could press the Stop button, which should reset the position to 0.
  2. The user could click or drag the slider anchor to a new position along the bar.

We can model both of these events together into a single stream for a scrubbed position:

// Create the change event for the slider for when a person clicks on the timer. 
// Map it to the new value of the slider and convert that to a position on our timeline
var slider = document.querySelector("input");
var sliderChange$ = Rx.Observable.fromEvent(slider, "change")
    .map(evt=>evt.target.value / 100 * maxDuration);

// An observable of new times to jump to based on either stopping or clicking the timer range
const scrub$ = stop$.mapTo(0)
    .merge(sliderChange$);

We already have an Observable for stop events, so we just need to define one for manually changing the slider anchor position and then merge them together. We create an Observable from a change event on our slider and use the map() operator to get the current value in milliseconds. Since our slider goes from 0 to 100 and our timer goes from 0 to 10,000 milliseconds (defined in our maxDuration variable), we can convert the current slider value to a position in milliseconds by dividing the current slider position by 100 and then multiplying by the time duration.
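As a quick sanity check on that arithmetic, here is the conversion pulled out into a standalone function (maxDuration is repeated here so the snippet is self-contained):

```javascript
// Convert a slider value (0–100) to a timeline position in milliseconds
var maxDuration = 10000;

function sliderToMs(sliderValue) {
  return sliderValue / 100 * maxDuration;
}

console.log(sliderToMs(0));   // 0
console.log(sliderToMs(25));  // 2500
console.log(sliderToMs(100)); // 10000
```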

For scrub$, we want to take our stop$ Observable and map the value to the constant 0, which will be the new scrubbed position. Then, we can merge it in with any sliderChange$ values to get a single stream for all scrubbed positions.

Calculating Incremental Intervals from the Timer

This is admittedly where the logic gets a bit complicated, but RxJS’s operators make it easy to implement. Recall the d3 timer Observable from before. As stated, d3 timer has no functionality for pausing or skipping around; it can only play once and stop once. But for our progress bar, we want to be able to pause and scrub as needed. This means we need a new timer every time we resume from a pause, since pausing stops the existing timer for good. Furthermore, we can’t rely on the timer’s values when we scrub, because it only emits cumulative times.

Therefore, we are not going to use d3 timer to keep track of cumulative times elapsed. Instead, we are going to use it to emit incremental intervals of time. By incremental intervals, what I mean is that we are going to have it emit the milliseconds that have passed since the timer’s previously emitted value, aka the gaps in time. By having these elapsed intervals instead of a cumulative value, we can keep track of our current position in another way and use the d3 timer to just increment our progress position from whatever position it is in at that moment. We’ll do that later, but first let’s create an Observable of the intervals:

// Whenever someone presses play, create a new timer and calculate the intervals between each elapsed time
const interval$ = play$.switchMap(() => timer$
        .startWith(0) // start with 0
        .pairwise() // emit the last 2 values at a time in an array
        .map(([a, b]) => b - a) // calculate the difference between the last two values
        .combineLatest(speed$, (interval, speed) => interval * speed) // multiply the interval by the current speed
        .takeUntil(pause$.merge(stop$)) // emit these timer intervals until a pause or stop event
);

First, we take the play$ stream and use a switchMap that will produce a new timer$ Observable each time that Play is clicked. This timer$ Observable will start with the value 0, emit the last 2 seen values at all times via the pairwise operator, and then map the output to the interval between the last two seen values. This gets us our incremental progress, rather than cumulative progress, from the d3 timer.
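The pairwise-then-diff step can be sketched on a plain array. The numbers here are made up for illustration, but the transformation is the same one the operators perform:

```javascript
// Cumulative elapsed times, as d3.timer would emit them (ms)
const elapsed = [0, 16, 33, 50];

// pairwise: [[0, 16], [16, 33], [33, 50]]
const pairs = elapsed.slice(1).map((b, i) => [elapsed[i], b]);

// map(([a, b]) => b - a): the incremental gaps between emissions
const intervals = pairs.map(([a, b]) => b - a);

console.log(intervals); // [16, 17, 17]
```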

We add a couple of other features here, however. Previously we created a speed$ Observable that emits the value 1 or 2 based on which speed the user has selected. We can incorporate this with our intervals to adjust the speed of our timer. The logic here is straightforward: if we want our timer to go twice as fast, we just multiply each incremental interval by 2! So every 50ms that passes is output as 100ms passing, and so on. We use combineLatest for this operation, which combines each value of an Observable with the latest seen value from another Observable. Finally, we want our timer to stop whenever the user presses Pause or Stop. The takeUntil operator keeps the subscription alive until an input Observable fires; in this case, our input Observable is the merge of pause$ and stop$, so that either one will stop the timer.
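The speed-scaling part of that can also be sketched by hand, again with made-up numbers. A mutable speed variable stands in for the latest value combineLatest would track:

```javascript
// Raw incremental intervals (ms) coming off the timer
const rawIntervals = [16, 17, 17, 16];

// The most recent speed value seen so far
let speed = 1;

const scaled = rawIntervals.map((interval, i) => {
  if (i === 2) speed = 2; // the user clicks the 2x button mid-stream
  return interval * speed; // each interval is scaled by the current speed
});

console.log(scaled); // [16, 17, 34, 32]
```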

Getting the Progress Position

All of this code has led us to our ultimate goal: being able to track the position of our progress bar. We’ve coded two distinct ways that the position can change: either the timer pushes it forward, or a scrub event happens that causes it to skip to a new position. In order to accommodate both use-cases, we can take a redux-like approach to modeling this value. We can model the progress position as a “store”, with “reducers” that can be applied to that store. In our case there are two “actions”: increment the store value, or skip the store value to a new value.

We’ll start by mapping our “actions” to “reducers” in the form of functions that take in our progress value and return a new value. For the interval$ stream, it will return a function that takes in our previous value and returns the previous value + the latest interval. For the scrub$ stream, it will return a function that disregards the previous value and just returns a new value from scrubbing. Then we can apply those functions to our current value, using the scan operator to produce an accumulator function. Our scan will start with the value 0 and take in a function for how to modify that value. As we defined before, our functions will return a new value.

Finally, we don’t want our progress bar to exceed our defined duration so we will use a filter operator to remove any values greater than 10,000 ms:

// Use the interval stream with the scrub stream to either add time or jump in time
const time$ = interval$
    .map(m => (v) => v + m) // When interval stream fires, pass a function that takes previous value and adds latest interval
    .merge(scrub$.map(m => (v) => m)) // When the scrub stream fires, jump to the new value
    .startWith(0) // start the time at 0
    .scan((val, fn) => fn(val)) // for each value, apply the latest function that fired (either increment or jump)
    .filter(t => t <= maxDuration); // only take times within our max duration range
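The scan step can be sketched with a plain array reduce: each event is a “reducer” function from the previous position to the next one. The event sequence here is invented for illustration:

```javascript
// Each event is a function that transforms the previous position:
// intervals increment it, scrubs replace it outright.
const events = [
  v => v + 100, // interval: advance 100ms
  v => v + 100, // interval: advance another 100ms
  v => 5000,    // scrub: jump straight to 5000ms
  v => v + 50   // interval: advance from the scrubbed position
];

// scan((val, fn) => fn(val)) folds the reducers over a starting value of 0
const position = events.reduce((val, fn) => fn(val), 0);

console.log(position); // 5050
```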

Updating the DOM

Now that we have a time$ Observable for our progress position, we can use that value to update the DOM. We will update our ticker to display the current value and position the slider to its correct position based on the time and overall duration:

// Subscribe: Print the time and update the slider position
time$.subscribe(t => {
    document.querySelector("#time").innerHTML = t;
    slider.value = t / maxDuration * 100;
});


And there you have it! Definitely some complex topics in there, but it’s a testament to the Observable pattern that you can define complex functionality like that in such a declarative format. For more details on the various methods and operators we used to create and transform Observables, I find the API reference very handy.

– Speros

Conflict in the Democratic Republic of Congo

Map of violent conflicts in the DRC

As mentioned in a previous post, Axis Group took first place in Qlik’s latest hackathon at Qonnections 2017. The hackathon was a partnership with the UN to look at data from ACLED about armed conflicts in and around the Democratic Republic of Congo. The teams had 2 weeks to put together an application for exploring this data with visualizations to find insights.

I’m excited to finally share this project online. It’s far from perfect due to the quick, hacky nature of the contest, but I am proud to display the work that our team of designers, developers, and data scientists was able to do in such a short period of time on an important and sobering dataset.

Check out the project here: Conflict in the Democratic Republic of Congo


Visualizing Conflict in Africa

Fatalities from conflicts in the DRC and its neighbors

The Visual Analytics team at Axis Group recently won a data viz hackathon with an exploratory visualization tool for analyzing conflict data in and around the Democratic Republic of Congo. I’m very proud of the work that our team did and can’t wait to share the full application as well as blog posts on some of the design and technical lessons we learned along the way. The application needs a little cleanup before it goes live, but in the meantime I’ve received a lot of questions about the solution so I’d like to share some info here as a preview.

The Hackathon

At Qlik’s annual Qonnections conference this year, Qlik Branch partnered with the UN to host a hackathon. The goal was to take an open data set provided by Qlik and the UN and produce a visual tool for exploring it to answer key questions. The UN provided a data set from the Armed Conflict Location & Event Data Project (ACLED) covering conflict events in Africa over the last 20 years. The event data includes information on associated fatalities, key actors, location and timing, and notes describing each event.

The Hackathon participants had 2 weeks to load the data into Qlik’s platform and use the Qlik APIs to produce a visual analysis that the UN could use to answer key questions about the dataset, such as:

  • How have the major actors changed over time?
  • Where is the violence happening, and how are civilians impacted?
  • Which actors operate across country borders?

For the scope of the project, the UN placed an emphasis on the conflict going on in the Democratic Republic of Congo. The dataset reveals the horrible nature of these conflicts and the effects on innocent civilians. In our visualization design we sought to provide analytical tools for exploring this data while staying true to the reality of the situation and conveying the horrors of the events.

Evolution of the Conflicts Over Time

At its most granular, the dataset goes down to the individual event level. Each event has participating actors tagged to it – these actors include government forces, political militias, rebels, protestors, etc. We wanted to take this data and use it to show how the conflict has changed over time.

From our initial analysis of the data, one thing became extremely clear: the conflict data is messy because the situation is messy. As we looked at the data from year to year, we saw a constant tide of new actors coming in and out of the picture. Part of our human nature is to be drawn to stories, and in many of our stories there is a common theme of a good guy vs. a bad guy. Unfortunately, reality is much more complex than this and we wanted to convey that complexity to our audience. We also wanted them to feel the cumulative impact of all of this conflict on the region, while giving a user the opportunity to interact with the data and explore it at will.

With that goal in mind, we produced an interactive map that animates the build up of conflicts over the 20 year period from 1997 to 2016:

Animating events year over year in the DRC

The view above behaves like a video and an interactive visualization simultaneously. As the screen animates, you can interact with any data point on the screen, such as hovering over the individual events on the map to see more details in a tooltip. The list of top actors on the right side re-sorts itself by year so that you can see the rapid changes in the main perpetrators in the country. In this screenshot, you can see the LRA rise to the top and witness their outpouring of violence on civilians in the top right corner of the country. Key notes from the data set are highlighted in the top left to give some extra details on the kind of violence that is being seen from year to year.

While it animates like a video, you can also stop the animation and go back and forth to any point in time using the line chart. This feature makes it easy to switch between passively observing the evolution and actively exploring the data at key points in time.

The Actor Network

One key question posed in the UN’s Hackathon brief was about the interactions between actors. Specifically, they want to better understand how the actors are fighting or allying with each other and how those relationships change over time.

We wanted to answer this question with a network diagram but wanted to avoid producing a pile of spaghetti that was impossible to interpret. Specifically, we wanted readers to be able to determine:

  • Who are the major players in the network at any point in time?
  • How are the actors distributed across type of actor?
  • For any given actor, who are they in conflict with and who are they allied with?
  • How do these relationships change year over year?

We decided to implement a network diagram with some clustering and interactivity to help answer these questions.

Actor interactions, colored and grouped by actor type

Each node in our network is sized by its relationship to violence – the bigger the circle, the more fatalities associated with events involving that actor. To clean up our network, we clustered the nodes by actor type at the overview level. The clustering lets you see the number and size of actors within each group, and it immediately brings out some questions that warrant further analysis, such as why there are no interactions between ethnic militias and rebel forces.

Animation is used as the data transitions from year to year to emphasize how the network changes. In the example above, you can see how the ethnic militia groups have a lot of turnover from year to year.

Single actors can also be focused on in the network. Hovering provides a quick glimpse of the actor’s relationships within the larger network. Clicking on the actor brings it into focus at the center of the screen and splits its relationships into two groups: conflicts to the left and alliances to the right.

The Technical Bits

This project presented new technical challenges that our team hadn’t faced before, and I’m looking forward to sharing how we overcame those challenges in future posts. These posts will include:

  • Using deck.gl to map out thousands of events animating over time
  • Setting up an animation timer with pause, jumping forward and backward, and replay with a combination of d3-timer and RxJS
  • Creating piece-wise animation functions to calculate the visual state of events on our map at any point in time
  • Using d3-force from v4 of d3 to create our network diagram
  • Cleaning up our mapping data to get the appropriate region polygons for our choropleths

The 3 Best Things I Saw at Qonnections 2017

Qonnections 2017 was this past week, and it was a whirlwind of a conference. Qlik debuted all kinds of new features and plans for their products that left a big impression on the attendees. As a developer who focuses on Qlik’s APIs, there were 3 things that stood out to me the most:


Picasso

Qlik demonstrated a new visualization framework that they are creating called Picasso. Picasso is a declarative approach to custom visualization: as a developer, you write a JSON spec that describes what your visualization should look like, and Picasso parses that spec and outputs a visualization to either SVG or Canvas. It is very similar to Vega-lite, although it appears to have better support for responsive design.

If Picasso is executed properly, it should enable developers to build custom charts much faster than they can today, and also easily share and modify charting code as well. Very much looking forward to playing with this one.

Offline mode

During Josh Good’s talk on his 5 favorite things coming in June, Vinay Kapoor had the audacity to interrupt with a 6th favorite thing. He didn’t disappoint as he showed the Qlik Engine running offline on mobile devices. Via a Qlik Sense native app, he was able to sync his mobile app with a QVF on a server and then use that app offline, taking full advantage of the Associative Model. The ability to run QIX on a mobile device has huge implications for what the Engine can be used for – I can imagine a future where you build a mobile app that has nothing to do with dashboarding but uses the QIX Engine to power its data experience.

The Diversity and Quality of the Hackathon Submissions

I attended the very first Qlik Hackathon in Orlando in April 2014. Qlik Sense and its APIs had just been released, so the developers assembled that day found themselves facing an uphill battle as they wrestled with a brand new technology. Some interesting solutions came out of that day – two that I specifically recall are Ralf Belcher and Torben Seebach getting a D3 chord diagram working in an extension, and Jason Mondesir and me creating a table with minigraphics that could be adjusted by the user. While these solutions were a good first step, they were just the tip of the iceberg for what could be accomplished with the APIs.

Fast forward a few years and you can see the massive progress of the Qlik web dev community in the latest Hackathon. This year’s challenge was to visualize conflict data from Africa, and the results from the 6+ teams dazzled. They included a cloud heat map with MapBoxGL, a search bot, a refugee migration map, and several variations on network diagrams.

A few of the Hackathon submissions

It was amazing to see how far the community has come and the creative solutions they produced with the APIs. I can’t wait to see what’s next from this group.

I’m also super proud of the Axis Group team for taking home the Hackathon’s top award and want to recognize the following people who contributed significantly to the project but weren’t recognized at the conference: Gen Ellsworth, Liza George, Manasvi Lalwani, Jessie Lian, Scott Reedy, and Tim Wright.

Making QlikView Dance with Error Bars

While my life these days tends to center around Qlik Sense and its APIs, occasionally QlikView pops its head back into the picture.

At Axis Group, we employ UX designers with a specialization in data visualization who work with users to understand their challenges and then produce dashboard designs that can best help them address their problems. While these designers try to stay within the constraints of whatever technology will be used for implementation, sometimes they create a chart that is just on the fringes of feasibility.

I received an email on such a chart the other day that needed to be implemented in QlikView. The design showed this chart:

The chart plots events that happen over time in a dot plot. The part that the developer was unsure about was the markers on the x-axis. The question was, could we produce this design pixel perfect in QlikView?

After some exploration, the answer turns out to be yes. We used a combo chart to produce the dots, which gave us access to an additional feature: error bars. In QlikView, error bars can be defined with a central point and a line above and below that central point. They look like a capital serif “I”:

In our scenario, we want to use the middle part of the error lines while hiding the ends to produce the effect of a marker on the axis. So how to hide the ends?

The bottom end of the error bar can be hidden by placing it directly on top of the x-axis and matching the colors so that it blends into the axis. This overlay gives the illusion that the error bar is protruding from the x-axis. We set a static y-axis min equal to the static value of the error bar bottom so that the two always line up.

That leaves the top of the error bar. This piece can be hidden with a well-placed reference line: we create a reference line that is white to match the chart background and place it at the same location as the tops of the error bars, covering them.

What’s remaining is the middle of the error bar. After setting the y-axis min and max to match the spacing we want and hiding elements we don’t need like the y-axis and legend, we get the following:

There are a number of creative uses for error bars that you can leverage in QlikView to make some unique chart forms that don’t come out of the box. Check out the QVW to play more with this example.

OpenVis Conf 2017

If you like data, visualization, and the web, you’ve got to find your way to OpenVis Conf next year. 2017 marked my third year in a row, and it was as stimulating and inspiring as ever. Insightful talks, great people, and a fun environment – you can’t ask for much more than that.

The calm before the storm

I honestly recommend watching all of the videos but here were a few highlights for me:



Mike showed off his latest project: a reactive programming environment called d3-express. I’ve gotten my hands dirty with RxJS over the last year and fallen in love with reactive programming, so it’s exciting to see Mike’s take on this paradigm. d3-express looks like a rapid way to code interactive projects, and I’m interested to see if it will have a backwards effect and influence people to build more reactive tools and plugins for d3.


Amelia provided lots of practical considerations to keep in mind when working with map polygons that I had never known before. Specifically, her explanation of how polygons are essentially binning data and therefore are subject to different outcomes based on how those polygons are assigned was insightful.


This may have been my favorite talk, since it specifically applies to an API tutorial project that I am working on right now. But I think the lessons imparted by Catherine and Rahul about how to build a tool that is approachable by a newcomer apply to anyone building anything, even if you’re an expert building tools for other experts.


Amanda Cox’s Keynote

Amanda’s exploration of how we communicate uncertainty and where we might go from here was informative and left me with a lot to think about. I’m curious to see how this communication evolves in the BI space as predictive analytics becomes more mainstream.


Noah’s session may as well have been a standup comedy routine (he killed). His exploration of the problem space was both entertaining and intriguing, and it explained a problem I’d seen before when animating lines in an interactive (but never did anything about). Turns out there’s a solution for that.


Kanit “Ham” Wongsuphasawat, Dominik Moritz, & Arvind Satyanarayan presented on the amazing work they’ve been doing with Vega-lite. I walked into this a bit of a skeptic – I’d played with an earlier version of Vega, and while I saw the potential, I struggled with it and found the documentation lacking for my use case. The Vega-lite talk restored my faith, though, and I’m looking forward to jumping into the beta and hopefully clearing the hurdles I ran into before.


I tried to pick my top 3 from each day but couldn’t help but include 4 for Day 2. I loved the color tools that Connor showed and the evidence he presented to validate their effectiveness. I’m eager to try the color palette generator on my next project.


Again, while those were my favorites, all of the talks were excellent and I’m looking forward to revisiting the videos when they’re released.