Deep Learning from the Foundations

Today we are releasing a new course (taught by me), Deep Learning from the Foundations, which shows how to build a state of the art deep learning model from scratch. It takes you all the way from the foundations of implementing matrix multiplication and back-propagation, through to high performance mixed-precision training, to the latest neural network architectures and learning techniques, and everything in between. It covers many of the most important academic papers that form the foundations of modern deep learning, using “code-first” teaching, where each method is implemented from scratch in Python and explained in detail (in the process, we’ll discuss many important software engineering techniques too). The whole course, covering around 15 hours of teaching and dozens of interactive notebooks, is entirely free (and ad-free), provided as a service to the community. The first five lessons use Python, PyTorch, and the fastai library; the last two lessons use Swift for TensorFlow, and are co-taught with Chris Lattner, the original creator of Swift, clang, and LLVM.

This course is the second part of fast.ai’s 2019 deep learning series; part 1, Practical Deep Learning for Coders, was released in January, and is a prerequisite. It is the latest in our ongoing commitment to providing free, practical, cutting-edge education for deep learning practitioners and educators—a commitment that has been appreciated by hundreds of thousands of students, led to The Economist saying “Demystifying the subject, to make it accessible to anyone who wants to learn how to build AI software, is the aim of Jeremy Howard… It is working”, and to CogX awarding fast.ai the Outstanding Contribution in AI award.

The purpose of Deep Learning from the Foundations is, in some ways, the opposite of part 1. This time, we’re not learning practical things that we will use right away, but are learning foundations that we can build on. This is particularly important nowadays because this field is moving so fast. In this new course, we will learn to implement a lot of things that are inside the fastai and PyTorch libraries. In fact, we’ll be reimplementing a significant subset of the fastai library! Along the way, we will practice implementing papers, which is an important skill to master when making state of the art models.

Chris Lattner at TensorFlow Dev Summit

A huge amount of work went into the last two lessons—not only did the team need to create new teaching materials covering both TensorFlow and Swift, but also create a new fastai Swift library from scratch, and add a lot of new functionality (and squash a few bugs!) in Swift for TensorFlow. It was a very close collaboration between Google Brain’s Swift for TensorFlow group and fast.ai, and wouldn’t have been possible without the passion, commitment, and expertise of the whole team, from both Google and fast.ai. This collaboration is ongoing, and today Google is releasing a new version of Swift for TensorFlow (0.4) to go with the new course. For more information about the Swift for TensorFlow release and lessons, have a look at this post on the TensorFlow blog.

In the remainder of this post I’ll provide a quick summary of some of the topics you can expect to cover in this course—if this sounds interesting, then get started now! And if you have any questions along the way (or just want to chat with other students) there’s a very active forum for the course, with thousands of posts already.

Lesson 8: Matrix multiplication; forward and backward passes

Our main goal is to build up to a complete system that can train ImageNet to a world-class result, both in terms of accuracy and speed. So we’ll need to cover a lot of territory.

Our roadmap for training a CNN

Step 1 is matrix multiplication! We’ll gradually refactor and accelerate our first, pure Python, matrix multiplication, and in the process will learn about broadcasting and Einstein summation. We’ll then use this to create a basic neural net forward pass, including a first look at how neural networks are initialized (a topic we’ll be going into in great depth in the coming lessons).
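
For a flavour of that refactoring journey, here is a rough sketch (not the exact course notebooks) showing the pure Python triple loop next to broadcasting and einsum versions:

```python
import torch

def matmul_loops(a, b):
    # Pure Python triple loop: correct, but thousands of times slower than needed.
    ar, ac = a.shape
    br, bc = b.shape
    assert ac == br
    c = torch.zeros(ar, bc)
    for i in range(ar):
        for j in range(bc):
            for k in range(ac):
                c[i, j] += a[i, k] * b[k, j]
    return c

def matmul_broadcast(a, b):
    # Broadcasting: handle one row of `a` against all of `b` at once.
    c = torch.zeros(a.shape[0], b.shape[1])
    for i in range(a.shape[0]):
        c[i] = (a[i].unsqueeze(-1) * b).sum(dim=0)
    return c

def matmul_einsum(a, b):
    # Einstein summation: the whole operation in one line.
    return torch.einsum('ik,kj->ij', a, b)

a, b = torch.randn(5, 3), torch.randn(3, 4)
for f in (matmul_loops, matmul_broadcast, matmul_einsum):
    assert torch.allclose(f(a, b), a @ b, atol=1e-5)
```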

Broadcasting and einsum let us accelerate matmul dramatically

Then we will implement the backward pass, including a brief refresher of the chain rule (which is really all the backward pass is). We’ll then refactor the backward pass to make it more flexible and concise, and finally we’ll see how this translates to how PyTorch actually works.
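
As a taste of what that looks like (a simplified sketch rather than the course notebook), here is a manual forward and backward pass for a tiny two-layer network with an MSE loss, storing each gradient in a `.g` attribute:

```python
import torch

def lin(x, w, b): return x @ w + b

def forward_and_backward(x, y, w1, b1, w2, b2):
    # Forward pass
    l1 = lin(x, w1, b1)
    a1 = l1.clamp_min(0.)                 # ReLU
    out = lin(a1, w2, b2)
    diff = out - y
    loss = (diff ** 2).mean()

    # Backward pass: the chain rule, applied one layer at a time.
    out.g = 2. * diff / diff.numel()      # d(loss)/d(out)
    w2.g = a1.t() @ out.g
    b2.g = out.g.sum(0)
    a1.g = out.g @ w2.t()
    l1.g = a1.g * (l1 > 0).float()        # gradient through the ReLU
    w1.g = x.t() @ l1.g
    b1.g = l1.g.sum(0)
    return loss

x, y = torch.randn(16, 10), torch.randn(16, 1)
w1, b1 = torch.randn(10, 50), torch.zeros(50)
w2, b2 = torch.randn(50, 1), torch.zeros(1)
forward_and_backward(x, y, w1, b1, w2, b2)
```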

Back propagation from scratch

Papers discussed

Lesson 9: Loss functions, optimizers, and the training loop

In the last lesson we had an outstanding question about PyTorch’s CNN default initialization. In order to answer it, I did a bit of research, and we start lesson 9 by seeing how I went about that research, and what I learned. Students often ask “how do I do research?”, so this is a nice little case study.

Then we do a deep dive into the training loop, and show how to make it concise and flexible. First we look briefly at loss functions and optimizers, including implementing softmax and cross-entropy loss (and the logsumexp trick). Then we create a simple training loop, and refactor it step by step to make it more concise and more flexible. In the process we’ll learn about nn.Parameter and nn.Module, and see how they work with the torch.optim classes. We’ll also see how Dataset and DataLoader really work.
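
To give one concrete taste of that material, here is softmax cross-entropy written via the logsumexp trick, checked against PyTorch's built-in version (a sketch using the standard definitions, not the course notebook itself):

```python
import torch

def logsumexp(x):
    # Subtract the per-row max before exponentiating so exp() can never overflow.
    m = x.max(dim=-1, keepdim=True).values
    return m.squeeze(-1) + (x - m).exp().sum(dim=-1).log()

def log_softmax(x):
    return x - logsumexp(x).unsqueeze(-1)

def nll_loss(log_probs, targets):
    # Negative log-likelihood: pick out the log-probability of the true class.
    return -log_probs[torch.arange(targets.shape[0]), targets].mean()

logits = torch.randn(8, 10) * 50      # large logits would overflow a naive softmax
targets = torch.randint(0, 10, (8,))
loss = nll_loss(log_softmax(logits), targets)
assert torch.allclose(loss, torch.nn.functional.cross_entropy(logits, targets))
```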

Once we have those basic pieces in place, we’ll look closely at some key building blocks of fastai: Callback, DataBunch, and Learner. We’ll see how they help, and how they’re implemented. Then we’ll start writing lots of callbacks to implement lots of new functionality and best practices!
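
To make the callback idea concrete before looking at fastai's versions, here is a heavily simplified sketch of a callback-driven fit loop (the names Learner and Callback echo the lesson, but this is not the actual fastai implementation):

```python
class Callback:
    # Subclasses override only the events they care about.
    def begin_fit(self, learn): pass
    def begin_batch(self, learn): pass
    def after_loss(self, learn): pass
    def after_batch(self, learn): pass

class PrintLoss(Callback):
    def after_batch(self, learn): print(learn.loss.item())

class Learner:
    def __init__(self, model, data, loss_func, opt, cbs=None):
        self.model, self.data, self.loss_func, self.opt = model, data, loss_func, opt
        self.cbs = cbs or []

    def _event(self, name):
        # Broadcast an event to every callback.
        for cb in self.cbs: getattr(cb, name)(self)

    def fit(self, epochs):
        self._event('begin_fit')
        for epoch in range(epochs):
            for self.xb, self.yb in self.data:
                self._event('begin_batch')
                self.pred = self.model(self.xb)
                self.loss = self.loss_func(self.pred, self.yb)
                self._event('after_loss')
                self.loss.backward()
                self.opt.step()
                self.opt.zero_grad()
                self._event('after_batch')
```

Because every callback sees the Learner itself, a callback can read or modify anything about training without the loop needing to know it exists.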

Callbacks in the training loop

Papers discussed

Lesson 10: Looking inside the model

In lesson 10 we start with a deeper dive into the underlying idea of callbacks and event handlers. We look at many different ways to implement callbacks in Python, and discuss their pros and cons. Then we do a quick review of some other important foundations:

  • __dunder__ special symbols in Python (see the short example below)
  • How to navigate source code using your editor
  • Variance, standard deviation, covariance, and correlation
  • Softmax
  • Exceptions as control flow
Python's special methods let us create objects that behave like builtin ones
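
As a tiny illustration of that point (not from the course notebooks), a handful of special methods is enough to make a custom class index, iterate, and print like a built-in container:

```python
class TinyList:
    "A minimal list-like container built from a few special methods."
    def __init__(self, items): self.items = list(items)
    def __len__(self): return len(self.items)
    def __getitem__(self, i): return self.items[i]
    def __setitem__(self, i, v): self.items[i] = v
    def __repr__(self): return f"TinyList({self.items})"

t = TinyList(range(5))
print(len(t), t[2], list(t))   # __len__, __getitem__, and iteration via __getitem__
```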

Next up, we use the callback system we’ve created to set up CNN training on the GPU. This is where we start to see how flexible this system is—we’ll be creating many callbacks during this course.

Some of the callbacks we'll create in this course

Then we move on to the main topic of this lesson: looking inside the model to see how it behaves during training. To do so, we first need to learn about hooks in PyTorch, which allow us to add callbacks to the forward and backward passes. We will use hooks to track the changing distribution of our activations in each layer during training. By plotting these distributions, we can try to identify problems with our training.
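
Here is a minimal sketch of that mechanism using PyTorch's register_forward_hook (the course wraps this in its own Hook class; this is just the bare idea):

```python
import torch
from torch import nn

model = nn.Sequential(nn.Linear(10, 50), nn.ReLU(), nn.Linear(50, 10))
stats = {}

def make_hook(name):
    # A forward hook receives (module, input, output) after every forward pass.
    def hook(module, inp, out):
        stats.setdefault(name, []).append((out.mean().item(), out.std().item()))
    return hook

handles = [m.register_forward_hook(make_hook(f'layer{i}')) for i, m in enumerate(model)]

model(torch.randn(64, 10))
print(stats)                      # per-layer activation mean and std for this batch
for h in handles: h.remove()      # always remove hooks when you're done with them
```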

An example temporal activation histogram

In order to fix the problems we see, we try changing our activation function, and introducing batchnorm. We study the pros and cons of batchnorm, and note some areas where it performs poorly. Finally, we develop a new kind of normalization layer to overcome these problems, compare it to previously published approaches, and see some very encouraging results.
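
For reference, here is what a stripped-down batchnorm layer looks like written by hand (training-mode statistics only, simplified relative to both nn.BatchNorm2d and the lesson's version):

```python
import torch
from torch import nn

class SimpleBatchNorm(nn.Module):
    def __init__(self, nf, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.gamma = nn.Parameter(torch.ones(1, nf, 1, 1))   # learnable scale
        self.beta = nn.Parameter(torch.zeros(1, nf, 1, 1))   # learnable shift

    def forward(self, x):
        # Normalize each channel using statistics computed over the whole batch.
        mean = x.mean(dim=(0, 2, 3), keepdim=True)
        var = x.var(dim=(0, 2, 3), keepdim=True)
        return self.gamma * (x - mean) / (var + self.eps).sqrt() + self.beta

x = torch.randn(8, 16, 32, 32)
print(SimpleBatchNorm(16)(x).shape)   # torch.Size([8, 16, 32, 32])
```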

Papers discussed

Lesson 11: Data Block API, and generic optimizer

We start lesson 11 with a brief look at a smart and simple initialization technique called Layer-wise Sequential Unit Variance (LSUV). We implement it from scratch, and then use the methods introduced in the previous lesson to investigate the impact of this technique on our model training. It looks pretty good!
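
In outline (a rough sketch of the published idea rather than the course notebook), LSUV feeds a real batch through the model and rescales each layer's weights until that layer's output has roughly zero mean and unit variance:

```python
import torch
from torch import nn

def lsuv_init(model, xb, tol=1e-3, max_iters=10):
    # For each linear/conv layer, adjust bias and weights so the layer's
    # output statistics on a real batch are approximately mean 0, std 1.
    stats = {}
    def hook(m, i, o): stats.update(mean=o.mean().item(), std=o.std().item())
    for layer in [m for m in model.modules() if isinstance(m, (nn.Linear, nn.Conv2d))]:
        handle = layer.register_forward_hook(hook)
        with torch.no_grad():
            for _ in range(max_iters):
                model(xb)
                if abs(stats['std'] - 1.) < tol and abs(stats['mean']) < tol: break
                if layer.bias is not None: layer.bias -= stats['mean']
                layer.weight /= stats['std']
        handle.remove()

model = nn.Sequential(nn.Linear(10, 50), nn.ReLU(), nn.Linear(50, 10))
lsuv_init(model, torch.randn(512, 10))
```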

Then we look at one of the jewels of fastai: the Data Block API. We already saw how to use this API in part 1 of the course; but now we learn how to create it from scratch, and in the process we will also learn a lot about how to better use it and customize it. We’ll look closely at each step (with a brief code sketch of the first two steps after the list):

  • Get files: we’ll learn how os.scandir provides a highly optimized way to access the filesystem, and os.walk provides a powerful recursive tree walking abstraction on top of that
  • Transformations: we create a simple but powerful list and function composition to transform data on-the-fly
  • Split and label: we create flexible functions for each
  • DataBunch: we’ll see that DataBunch is a very simple container for our DataLoaders
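
As a flavour of the first two steps above (a rough sketch, not the fastai source): os.walk, which is built on top of os.scandir, handles the recursive file listing, and composing transforms is just a fold over a list of plain functions:

```python
import os
from pathlib import Path

def get_files(path, extensions=None):
    # os.walk uses os.scandir under the hood, so this is fast even on big trees.
    files = []
    for root, _, names in os.walk(path):
        for name in names:
            if extensions is None or Path(name).suffix.lower() in extensions:
                files.append(Path(root) / name)
    return files

def compose(funcs):
    # Apply a list of transforms in order; the data flows left to right.
    def _inner(x):
        for f in funcs: x = f(x)
        return x
    return _inner

image_files = get_files('.', extensions={'.jpg', '.png'})
tfm = compose([str.strip, str.lower])
print(tfm('  Hello World  '))     # "hello world"
```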

Next up, we build a new StatefulOptimizer class, and show that nearly all optimizers used in modern deep learning training are just special cases of this one class. We use it to add weight decay, momentum, Adam, and LAMB optimizers, and take a detailed look at how momentum changes training.
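
The core idea can be sketched in a few lines (an illustration of a stateful momentum optimizer, not the course's StatefulOptimizer itself): an optimizer becomes “stateful” as soon as it keeps a per-parameter buffer between steps, which is all that SGD with momentum needs:

```python
import torch

class MomentumSGD:
    # A minimal stateful optimizer: one running-average buffer per parameter.
    def __init__(self, params, lr=0.1, mom=0.9):
        self.params, self.lr, self.mom = list(params), lr, mom
        self.state = {p: torch.zeros_like(p) for p in self.params}

    @torch.no_grad()
    def step(self):
        for p in self.params:
            if p.grad is None: continue
            buf = self.state[p]
            buf.mul_(self.mom).add_(p.grad)   # update the momentum buffer...
            p.sub_(self.lr * buf)             # ...then take the step

    def zero_grad(self):
        for p in self.params:
            if p.grad is not None: p.grad.zero_()

w = torch.randn(3, requires_grad=True)
opt = MomentumSGD([w], lr=0.1)
(w ** 2).sum().backward()
opt.step(); opt.zero_grad()
```

Weight decay, Adam, and LAMB differ only in what per-parameter state they keep (if any) and how that state is turned into the update.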

The impact of varying momentum on a synthetic training example

Finally, we look at data augmentation, and benchmark various data augmentation techniques. We develop a new GPU-based data augmentation approach which we find speeds things up quite dramatically, and allows us to then add more sophisticated warp-based transformations.
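
The key observation, sketched here in a very reduced form (assumed shapes, and far simpler transforms than the warps mentioned above), is that once images are collated into a batch on the GPU, augmentation becomes ordinary tensor operations applied to the whole batch at once:

```python
import torch

def augment_batch(xb):
    # xb: a batch of images already on the GPU, shape (batch, channels, height, width).
    bs = xb.shape[0]
    # Random horizontal flip, decided independently for each image.
    flip = torch.rand(bs, device=xb.device) < 0.5
    xb[flip] = xb[flip].flip(-1)
    # Random brightness jitter: one multiplier per image, broadcast over C, H, W.
    scale = 1 + 0.2 * (torch.rand(bs, 1, 1, 1, device=xb.device) - 0.5)
    return (xb * scale).clamp(0, 1)

xb = torch.rand(64, 3, 128, 128)      # add .cuda() on a GPU machine
xb = augment_batch(xb)
```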

Using GPU batch-level data augmentation provides big speedups

Papers discussed

Lesson 12: Advanced training techniques; ULMFiT from scratch

We implement some really important training techniques in lesson 12, all using callbacks:

  • MixUp, a data augmentation technique that dramatically improves results, particularly when you have less data, or can train for a longer time (a brief sketch follows below)
  • Label smoothing, which works particularly well with MixUp, and significantly improves results when you have noisy labels
  • Mixed precision training, which trains models around 3x faster in many situations.
An example of MixUp augmentation
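
In essence (a simplified sketch of the technique from the MixUp paper, not the course callback), MixUp takes a convex combination of pairs of images and of their labels, with the mixing weight drawn from a Beta distribution:

```python
import torch

def mixup_batch(xb, yb, alpha=0.4):
    # Mix each image (and its one-hot label) with a randomly chosen partner.
    lam = torch.distributions.Beta(alpha, alpha).sample((xb.shape[0],)).to(xb.device)
    shuffle = torch.randperm(xb.shape[0], device=xb.device)
    lam_x = lam.view(-1, 1, 1, 1)
    mixed_x = lam_x * xb + (1 - lam_x) * xb[shuffle]
    mixed_y = lam.view(-1, 1) * yb + (1 - lam.view(-1, 1)) * yb[shuffle]
    return mixed_x, mixed_y

xb = torch.rand(8, 3, 32, 32)
yb = torch.eye(10)[torch.randint(0, 10, (8,))]   # one-hot labels
mixed_x, mixed_y = mixup_batch(xb, yb)
```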

We also implement xresnet, which is a tweaked version of the classic resnet architecture that provides substantial improvements. And, even more importantly, its development provides great insights into what makes an architecture work well.

Finally, we show how to implement ULMFiT from scratch, including building an LSTM RNN, and looking at the various steps necessary to process natural language data to allow it to be passed to a neural network.

ULMFiT

Papers discussed

Lesson 13: Basics of Swift for Deep Learning

By the end of lesson 12, we’ve completed building much of the fastai library for Python from scratch. Next we repeat the process for Swift! The final two lessons are co-taught by Jeremy along with Chris Lattner, the original developer of Swift, and the lead of the Swift for TensorFlow project at Google Brain.

Swift code and Python code don't look all that different

In this lesson, Chris explains what Swift is, and what it’s designed to do. He shares insights on its development history, and why he thinks it’s a great fit for deep learning and numeric programming more generally. He also provides some background on how Swift and TensorFlow fit together, both now and in the future. Next up, Chris shows a bit about using types to ensure your code has fewer errors, whilst letting Swift figure out most of your types for you. And he explains some of the key pieces of syntax we’ll need to get started.

Chris also explains what a compiler is, and how LLVM makes compiler development easier. Then he shows how we can actually access and change LLVM builtin types directly from Swift! Thanks to the compilation and language design, basic code runs very fast indeed - about 8000 times faster than Python in the simple example Chris showed in class.

Learning about the implementation of `float` in Swift

Finally, we look at different ways of calculating matrix products in Swift, including using Swift for TensorFlow’s Tensor class.

Swift resources

Lesson 14: C interop; Protocols; Putting it all together

Today’s lesson starts with a discussion of the ways that Swift programmers will be able to write high performance GPU code in plain Swift. Chris Lattner discusses kernel fusion, XLA, and MLIR, which are exciting technologies coming soon to Swift programmers.

Then Jeremy talks about something that’s available right now: amazingly great C interop. He shows how to use this to quickly and easily get high performance code by interfacing with existing C libraries, using Sox audio processing, and VIPS and OpenCV image processing as complete working examples.

Behind the scenes of Swift's C interop

Next up, we implement the Data Block API in Swift! Well… actually in some ways it’s even better than the original Python version. We take advantage of an enormously powerful Swift feature: protocols (aka type classes).

Data blocks API in Swift!

We now have enough Swift knowledge to implement a complete fully connected network forward pass in Swift—so that’s what we do! Then we start looking at the backward pass, and use Swift’s optional reference semantics to replicate the PyTorch approach. But then we learn how to do the same thing in a more “Swifty” way, using value semantics to do the backward pass in a really concise and flexible manner.

Finally, we put it all together, implementing our generic optimizer, Learner, callbacks, etc, to train Imagenette from scratch! The final notebooks in Swift show how to build and use much of the fastai.vision library in Swift, even though in these two lessons there wasn’t time to cover everything. So be sure to study the notebooks to see lots more Swift tricks…

Further information

More lessons

We’ll be releasing even more lessons in the coming months and adding them to an attached course we’ll be calling Applications of Deep Learning. They’ll be linked from the Part 2 course page, so keep an eye out there. The first in this series will be a lesson about audio processing and audio models. I can’t wait to share it with you all!

Sneak peek at the forthcoming Audio lesson

A LaTeX add-in for PowerPoint - my father's day project

For creating presentations there are a lot of features in PowerPoint that are hard to beat. So it’s not surprising that it’s a very popular tool—I see a lot of folks presenting PowerPoint presentations at machine learning talks that I attend. However, for equation-heavy academic publishing, many scientists prefer LaTeX. There are many reasons for this, but one key one is that LaTeX provides great support for creating equations. Whilst PowerPoint has an equation editor of its own, it is not a great match for LaTeX-using scientists, because:

  • It’s a pain to have to re-enter all your equations again into a new tool
  • The GUI approach takes a lot longer to enter equations compared to LaTeX (once you’ve learned LaTeX’s syntax), although Microsoft Office equations have great keyboard support too, if you know where to look.

To avoid this problem, most scientists I’ve seen tend to copy screenshots from the LaTeX output of their papers, and paste them into PowerPoint. However this has its own problems, for instance:

  • The fonts are unlikely to match up correctly
  • It’s hard to resize the text to match the equation picture, and vice versa
  • The bitmap screenshot is low resolution, so doesn’t print well
  • The equations don’t reflow with the text, so have to be manually placed
  • Alignment commands don’t work, so alignment has to be done manually
  • …and so forth

If you’re one of those people looking to include LaTeX equations in PowerPoint, I’ve got some good news for you—have a look at this:

Real equations in PowerPoint, using LaTeX syntax

That’s right, this picture shows a real, editable, resizable, full-resolution equation in PowerPoint, created using LaTeX syntax! What’s the secret? Well… the secret is that Microsoft has actually included this functionality in PowerPoint for us, but they just totally butchered the front-end implementation, and failed to document it properly! So for my father’s day 2019 project, I created a little add-in to try to address that. Here’s how to use it.

How to use LaTeX in PowerPoint

To use LaTeX in PowerPoint you have to complete a few setup steps first. (I’ve only tested this on the latest Office 365 on Windows 10.)

  1. Download the latex PowerPoint add-in from here
  2. Put the add-in file somewhere convenient, and then add it to PowerPoint by clicking File then Options, clicking Add-ins in the options list on the left, then choose PowerPoint Add-ins from the Manage drop-down, and click Go. Choose Add New in the dialog box that pops up, and select the latex.ppam file you downloaded
  3. Click Enable Macros in the security notice that pops up.
The 'PowerPoint Add-ins' window
The 'Security Notice' window

You’ll now find that there’s a new LaTeX tab in your ribbon. Each time you open a new PowerPoint session you’ll need to switch it to “LaTeX mode”. To do so, click inside a text box (so the cursor is flashing) and choose Enable LaTeX in the LaTeX tab. This file will now be in LaTeX mode until you close and reopen PowerPoint. This is necessary to use the Input LaTeX button (see next paragraph), which is the only way I suggest to try to enter or edit LaTeX in PowerPoint.

Now you are ready to insert your equation. Click inside a text box, and ensure the cursor is at the end of the text box (currently the macro only works if you’re at the end of the selected text box). Now click Input LaTeX in the LaTeX tab, and paste your equation into the input box that pops up (you can also type into it, of course, although I’d suggest you type your LaTeX into a regular text editor and paste it to PowerPoint from there, so you have a convenient source for all your equations’ LaTeX source). That’s it! The equation is now a regular PowerPoint equation, so when you click inside it, everything is editable, and you can also select the equation and change its font size, color, etc.

You can even select the equation and add Wordart effects to it, if you want to really ham things up!…

Don't do this in polite company

Additional customization and tips

You can edit the equation using the normal Microsoft Office equation ribbon commands. If you want to see and edit the LaTeX source again, click Linear on the Equation ribbon. However, don’t edit this LaTeX directly in PowerPoint—it will mangle it as you type! Instead, copy it into an external editor and change it there, then create a new equation with the Input LaTeX command as above. (This is why it’s easier to simply keep all your original LaTeX source in a plain text file, if you’re not editing the equations using the equation ribbon.)

Apparently Microsoft hates productivity, or at least that’s the only reason I can think of that they decided to remove one of the most important features for productivity: the ability to customize and add keyboard shortcuts. So if you want to add a keyboard shortcut for Input LaTeX, you instead have to right-click on the Input LaTeX button in the ribbon, and choose Add to Quick Access Toolbar. You’ll now see an extra button in the very top left of your window (that’s the Quick Access Toolbar). Press and release Alt, and you’ll be able to see what numeric shortcut has been assigned to that button. Press and release Alt again to remove the shortcut overlays. Now you’re ready to use the keyboard shortcut. Click inside a textbox as before (at the end of it) and, while holding down Alt, press the number you noted down before. You should see the input box appear.

If you want to contribute improvements to the add-in, or just see how it works, head over to the latex-ppt repo. latex.pptm contains the macro, so you can edit it and try out your changes there. If you just want to see the (tiny amount of) code, I’ve popped it in the macros.bas file. My macros are very basic right now, so PRs with improvements and fixes are most welcome!

How this works

Microsoft have actually added all the necessary stuff to make LaTeX work in PowerPoint already. They’ve just not provided any UI for it, or documentation. And the editor doesn’t work. So I created a little add-in to automate the use of the features described below.

Microsoft Office supports a rather nifty plain text equation format called UnicodeMath, which used to be called Linear format. That’s what the PowerPoint ribbon still calls it, in fact. In the Equation ribbon you can click the Linear format button to type UnicodeMath directly. You can switch the linear format mode to LaTeX by typing Unicode character “Ⓣ” into an equation. Apparently that’s been in Microsoft Office for a while, but it’s only recently that the developer actually got around to writing it down. This post includes some additional useful information:

The LaTeX option supports all TeX control words appearing in Appendix B of the UnicodeMath spec. That includes many math operators, Greek letters, and various other symbols. The verbose LaTeX notations like \begin{equation} and \begin{matrix} aren’t supported, but the more concise TeX notations are supported, such as \matrix{…} and \pmatrix{…}. Fractions can be entered in the LaTeX form \frac{…}{…} or in the TeX form {…\over…}. \displaymath is implied if the math zone fills the hard/soft paragraph and currently it can’t be turned on in inline math zones. Unicode math alphanumerics can be entered using control words like \mathbf{}.
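
For example, here are the two concise forms mentioned in that description (illustrative only; paste one expression at a time via the Input LaTeX button):

```latex
\frac{a+b}{c}                 % fraction in LaTeX form ({a+b \over c} also works)
\pmatrix{a & b \cr c & d}     % concise TeX matrix notation
```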

I hope you find this add-in and documentation useful! Many thanks to Murray Sargent of Microsoft who built the functionality in Office that this add-in uses.

Was this Google Executive deeply misinformed or lying in the New York Times?

YouTube has played a significant role in radicalizing people into conspiracy theories that promote white supremacy, anti-vaxxing, denial of mass shootings, climate change denial, and distrust of mainstream media, by aggressively recommending (and autoplaying) videos on these topics to people who weren’t even looking for them. YouTube recommendations account for 70% of time spent on the platform, and these recommendations disproportionately include harmful conspiracy theories. YouTube’s recommendation algorithm is trying to maximize watch time, and content that convinces you the rest of the media is lying will result in more time spent watching YouTube.

Given all this, you might expect that Google/YouTube takes these issues seriously and is working to address them. However, when the New York Times interviewed YouTube’s most senior product executive, Neal Mohan, he made a series of statements that, in my opinion, were highly misleading, perpetuated misconceptions, denied responsibility, and minimized an issue that has destroyed lives. Mohan has been a senior executive at Google for over 10 years and has 20 years of experience in the internet ad industry (which is Google/YouTube’s core business model). Google is well-known for carefully controlling its public image, yet Google has not issued any sort of retraction or correction of Mohan’s statements. Between Mohan’s expertise and Google’s control over its image, we can’t just dismiss this interview.

Headlines about YouTube from CNN, Newsweek, New York Times, & Bloomberg

Radicalization via YouTube

Worldwide, people watch 1 billion hours of YouTube per day (yes, that says PER DAY). A large part of YouTube’s success has been due to its recommendation system, in which a panel of recommended videos is shown to the user and the top video automatically begins playing once the previous video is over. This drives 70% of time spent on YouTube. Unfortunately, these recommendations are disproportionately for conspiracy theories promoting white supremacy, anti-vaxxing, denial of mass shootings, climate change denial, and denying the accuracy of mainstream media sources. “YouTube may be one of the most powerful radicalizing instruments of the 21st century,” Professor Zeynep Tufekci wrote in the New York Times. YouTube is owned by Google, which is earning billions of dollars by aggressively introducing vulnerable people to conspiracy theories, while the rest of society bears the externalized costs.

What is going on? YouTube’s algorithm was built to maximize how much time people spend watching YouTube, and conspiracy theorists watch significantly more YouTube than people who trust a variety of media sources. Unfortunately, a recommendation system trying only to maximize time spent on its own platform will incentivize content that tells you the rest of the media is lying, as explained by YouTube whistleblower Guillaume Chaslot.

What research has been done on this?

Guillaume Chaslot, who has a PhD in artificial intelligence and previously worked at Google on YouTube’s recommendation system, wrote software which does a YouTube search with a “seed” phrase (such as “Donald Trump”, “Michelle Obama”, or “is the earth round or flat?”), records which video is “Up Next” as the top recommendation, then follows which video is “Up Next” after that, and so on. The software does this with no viewing history (so that the recommendations are not influenced by user preferences), and repeats this thousands of times.

Photo of Guillaume Chaslot from the Guardian

Chaslot collected 8,000 videos from “Up Next” recommendations between August-November 2016: half came as part of the chain of recommendations after searching for “Clinton” and half after searching for “Trump”. When Guardian reporters analyzed the videos, they found that they were 6 times as likely to be anti-Hillary Clinton (regardless of whether the user had searched for “Trump” or “Clinton”), and that many contained wild conspiracy theories:

“There were dozens of clips stating Clinton had had a mental breakdown, reporting she had syphilis or Parkinson’s disease, accusing her of having secret sexual relationships, including with Yoko Ono. Many were even darker, fabricating the contents of WikiLeaks disclosures to make unfounded claims, accusing Clinton of involvement in murders or connecting her to satanic and paedophilic cults.”

This is just one of many themes that Chaslot has researched. Chaslot’s quantitative research on YouTube’s recommendations has been covered by The Wall Street Journal, NBC, MIT Tech Review, The Washington Post, Wired, and elsewhere.

In Feb 2018, Google Promised to Publish a Blog Post Refuting Chaslot (but still hasn't)

According to the Columbia Journalism Review, “When The Guardian wrote about Chaslot’s research, he says representatives from Google and YouTube criticized his methodology and tried to convince the news outlet not to do the story, and promising to publish a blog post refuting his claims. No such post was ever published. Google said it ‘strongly disagreed’ with the research—but after Senator Mark Warner raised concerns about YouTube promoting what he called ‘outrageous, salacious, and often fraudulent content,’ Google thanked The Guardian for doing the story.” (emphasis mine)

Why would Google claim that they had evidence refuting Chaslot’s research, and then never publish it? The Guardian story ran over a year ago, yet Google has still not produced their promised blog post. This suggests to me that Google was lying. It is important to keep this in mind when weighing the truthfulness of more recent claims by Google leaders regarding YouTube.

What did Neal Mohan get wrong?

YouTube’s Chief Product Officer, Neal Mohan, was interviewed in the New York Times, where he seemed to deny a well-documented phenomenon, ignored that 70% of time spent on the site comes from autoplaying recommendations (instead blaming users for what videos they choose to click on), made a nonsensical “both sides” argument (even though YouTube has extremist videos, they also have non-extremist videos…?), and perpetuated misconceptions (suggesting that since extremism isn’t an explicit input to the algorithm, that the results can’t be biased towards extremism). In general, his answers often seemed evasive, failing to answer the question that had been asked, and at no point did he seem to take responsibility for any mistakes or harms caused by YouTube.

Even the reporter interviewing Mohan seemed surprised, at one point interrupting him to clarify, “Sorry, can I just interrupt you there for a second? Just let me be clear: You’re saying that there is no rabbit hole effect on YouTube?” (The “rabbit hole effect” is when the recommendation system gradually recommends videos that are more and more extreme.) In response, Mohan blamed users and still failed to give a straightforward answer.

As background, Mohan first began working in the internet ad industry in 1997 at DoubleClick, which was acquired by Google for $3.1 billion in 2008. Mohan then served as SVP of display and video ads for Google for 7 years, before switching into the role of Chief Product Officer for Google’s YouTube. YouTube’s primary source of revenue is ads, and in 2018, YouTube was estimated to be doing $15 billion in annual sales and to be worth as much as $100 billion. Mohan is so beloved by Google that they offered him an additional $100 million in stock in 2013 to turn down a job offer from Twitter. All in all, this means that Mohan has 11 years of experience as a Google senior executive, and over 20 years of experience in the internet ad industry.

All the data, evidence, & research shows that extremism drives engagement and that YouTube promotes extremism.

“It is not the case that ‘extreme’ content drives a higher version of engagement or watch time than content of other types.” – Neal Mohan

Unfortunately, any recommendation system trying only to maximize time spent on its own platform will incentivize content that tells you the rest of the media is lying. A 2012 Google blog post and a 2016 paper published by YouTube engineers both confirm this: the YouTube algorithm was designed to maximize watch time. Ex-YouTube engineer Guillaume Chaslot explains the dynamic in more detail here.

The issue of YouTube’s role in radicalization has been confirmed by investigations by the Wall Street Journal and the Guardian, quantitative research projects such as AlgoTransparency, and by 20 current and former YouTube employees. Five senior personnel who quit Google/YouTube in the last 2 years privately cited the failure of YouTube’s leadership to address false, incendiary, and toxic content as their reason for leaving. That Mohan would try to deny this seems as outlandish as many of the conspiracy theories promoted on YouTube.

According to a Bloomberg investigation, Google leaders have repeatedly rejected efforts of YouTube staff who sought to address or even just investigate the issue of false, incendiary, & toxic content: “One employee wanted to flag troubling videos, which fell just short of the hate speech rules, and stop recommending them to viewers. Another wanted to track these videos in a spreadsheet to chart their popularity. A third, fretful of the spread of “alt-right” video bloggers, created an internal vertical that showed just how popular they were. Each time they got the same basic response: Don’t rock the boat… In February of 2018, a video calling the Parkland shooting victims “crisis actors” went viral on YouTube’s trending page. Policy staff suggested soon after limiting recommendations on the page to vetted news sources. YouTube management rejected the proposal.”

70% of YouTube views come from autoplaying its recommendations

“I’m not saying that a user couldn’t click on one of those videos that are quote-unquote more extreme, consume that and then get another set of recommendations and sort of keep moving in one path or the other. All I’m saying is that it’s not inevitable.” – Neal Mohan

This statement ignores the way that YouTube’s autoplay works in conjunction with recommendations, which drives 70% of time that users spend on the site, according to a previous talk Neal Mohan gave. Yes, technically, it is not “inevitable”, but it is the mechanism driving 700 million hours of watched videos PER DAY (70% of 1 billion). Mohan’s statement suggests that users are choosing to click on extreme videos, whereas in most cases, videos are being selected by YouTube and automatically begin playing without any clicking required.

Case study: Alex Jones

To understand the role of YouTube’s autoplaying recommendations, it is crucial to understand the distinction between hosting content and promoting content. To illustrate with an example, YouTube recommended Infowars director Alex Jones 15,000,000,000 times (before banning him in August 2018). If you are not familiar with Alex Jones, the Southern Poverty Law Center rates him as the most prolific conspiracy theorist of contemporary times. One of his conspiracy theories is that the 2012 Sandy Hook Elementary School shooting, in which 20 children were murdered, was faked, and that the parents of the murdered children are lying. This has resulted in a years-long harassment campaign against these grieving parents – many of them have had to move multiple times to try to evade harassment, and one father recently committed suicide. Alex Jones also advocates for white supremacy, campaigns against vaccines, and claims that victims of the Parkland school shooting are “crisis actors”.

The issue is not that people were searching for Alex Jones videos; the issue is that YouTube recommended (and often began autoplaying) Alex Jones videos 15,000,000,000 times to people who weren’t even looking for them. More recently, YouTube has been aggressively promoting content from Russia Today (RT), a Russian state-owned propaganda outlet.

As computational propaganda expert Renee DiResta wrote for Wired, “There is no First Amendment right to amplification—and the algorithm is already deciding what you see. Content-based recommendation systems and collaborative filtering are never neutral; they are always ranking one video, pin, or group against another when they’re deciding what to show you.” Autoplaying conspiracy theories boosts YouTube’s revenue – as people are radicalized, they stop spending time on mainstream media outlets and spend more and more time on YouTube.

Algorithms can be biased on variables that aren’t part of the dataset.

“What I’m saying is that when a video is watched, you will see a number of videos that are then recommended. Some of those videos might have the perception of skewing in one direction or, you know, call it more extreme. There are other videos that skew in the opposite direction,” Mohan gave a vague “both sides” defense, although it is unclear how less extreme videos balance out more extreme videos. He continued, “And again, our systems are not doing this, because that’s not a signal that feeds into the recommendations.” Mohan is suggesting that since extremism is not an explicit variable that is fed into the algorithm, the algorithm can’t be biased towards extremist material. This is false, but a common and dangerous misconception.

Algorithms can be (and often are) biased on variables that are not a part of the dataset. In fact, this is what machine learning does: it picks out latent variables. For example, the COMPAS recidivism algorithm, used in many USA courtrooms as part of bail, sentencing, or parole decisions, was found to have nearly twice as high a false positive rate for Black defendants compared to white defendants. That is, 45% of Black defendants who were labeled as “high-risk” did not commit another crime, compared to 24% of white defendants. Race is not an input variable to this software, so by Mohan’s reasoning, there should be no problem.

Not only does ignoring factors like race, gender, or extremism not protect you from biased results; many machine learning experts recommend the opposite: you need to be measuring these quantities to ensure that you are not unjustly biased.

YouTube is using machine learning to pump pollution into society. From my TEDx talk.

Like many people around the world, I’m alarmed by the resurgence in white supremacist movements and continued denialism of climate change, and it sickens me to think how much money YouTube has earned by aggressively promoting such conspiracy theories to people who weren’t even looking for them. I spend most of my time studying AI ethics, and I have been including YouTube’s behavior as an example (of what not to do) in my keynote talks for the last 2 years. Even though I know the big tech companies won’t do the right thing unless forced to by meaningful regulation, I was still disheartened by this New York Times interview. Not only does Google/YouTube still not take these issues seriously, but it is insulting that they think the rest of us will be placated by their misleading corporate-speak and half-baked evasions.

Advice for Better Blog Posts

A blog is like a resume, only better. I’ve been invited to give keynote talks based on my posts, and I know of people for whom blog posts have led to job offers. I’ve encouraged people to start blogging in several of my previous posts, and I even required students in my computational linear algebra course to write a blog post (although they weren’t required to publish it), since good technical writing skills are useful in the workplace and in interviews. Also, explaining something you’ve learned to someone else is a way to cement your knowledge. I gave a list of tips for getting started with your first blog post previously, and I wanted to offer some more advanced advice here.

Who is your audience?

Advice that my speech coach gave me about preparing talks, which I think also applies to writing, is to choose one particular person that you can think of as your target audience. Be as specific as possible. It’s great if this is a real person (and it is totally fine if they are not actually going to read your post or attend your talk), although it doesn’t have to be (you just need to be extra-thorough in making up details about them if it’s not). Either way, what is their background? What sort of questions or misconceptions might they have about the topic? At various points, the person I’m thinking of has been a friend or colleague, one of my students, or my younger self.

Being unclear about your audience can lead to a muddled post: for instance, I’ve seen blog posts that contain both beginner material (e.g. defining what training and test sets are) as well as very advanced material (e.g. describing complex new architectures). Experts would be bored and beginners would get lost.

Dos and Don'ts

When you read other people’s blog posts, think about what works well. What did you like about it? And when you read blog posts that you don’t enjoy as much, think about why not? What would make the post more engaging for you? Note that not every post will appeal to every person. Part of having a target audience means that there are people who are not in your target audience, which is fine. And sometimes I’m not somebody else’s target audience. As with all advice, this is based on my personal experience and I’m sure that there are exceptions.

Things that often work well:

  • Bring together many useful resources (but don’t include everything! the value is in your curation)
  • Do provide motivation and context. If you are going to explain how an algorithm works, first give some examples of real-world applications where it is used, or how it is different from other options.
  • People are convinced by several different things: stories, statistics, research, and visuals. Try using a blend of these.
  • If you’re using a lot of code, try writing in a Jupyter notebook (which can be converted into a blog post) or a Kaggle Kernel.

Don’ts

  • Don’t reinvent the wheel. If you know of a great explanation of something elsewhere, link to it! Include a quote or one sentence summary about the resource you’re linking to.
  • Don’t try to build everything up from first principles. For example, if you want to explain the transformer architecture, don’t begin by defining machine learning. Who is your target audience? People already familiar with machine learning will lose interest, and those who are brand new to machine learning are probably not seeking out posts on the transformer architecture. You can assume that your reader already has a certain background (sometimes it is helpful to make this explicit).
  • Don’t be afraid to have an opinion. For example, TensorFlow (circa 2016, before eager execution) made me feel unintelligent, even though everyone else seemed to be saying how awesome it was. I was pretty nervous writing a blog post that said this, but a lot of people responded positively.
  • Don’t be too dull or dry. If people lose interest, they will stop reading, so you want to hook them (and keep them hooked!)
  • Don’t plagiarize. Always cite sources, and use quote marks around direct quotes. Do this even as you are first gathering sources and taking notes, so you don’t make a mistake later and forget which material is someone else’s. It is wrong to plagiarize the work of others and ultimately will hurt your reputation. Cite and link to people who have given you ideas.
  • Don’t be too general. You don’t have to cover everything on a topic – focus on the part that interests (or frustrates) you most.

Put the time in to do it well

As DeepMind researcher and University of Oxford PhD student Andrew Trask advised, “The secret to getting into the deep learning community is high quality blogging… Don’t just write something ok, either—take 3 or 4 full days on a post and try to make it as short and simple (yet complete) as possible.” Honestly, I’ve spent far more than 3 or 4 days on many of my most popular posts.

However, this doesn’t mean that you need to be a “naturally gifted” writer. I attended a poor, public high school in a small city in Texas, where I had few writing assignments and didn’t really learn to write a proper essay. An introductory English class my first semester of college highlighted how much I struggled with writing, and after that, I tried to avoid classes that would require much writing (part of the reason I studied math and computer science is that those were the only fields I knew of that involved minimal writing AND didn’t have lab sessions). It wasn’t until I was in my 30s and wanted to start blogging that I began to practice writing. I typically go through many, many drafts, and do lots of revisions. As with most things, skill is not innate; it is something you build through deliberate practice.

Note: I realize many people may not have time to blog – perhaps you are a parent, dealing with chronic illness, suffering burnout from a toxic job, or prefer to do other things in your free time – that’s alright! You can still have a successful career without blogging; this post is only for those who are interested.

Write a blog version of your academic paper

The top item on my wish list for AI researchers is that more of them would write blog posts to accompany their papers:

Far more people may read your blog post than will read an academic paper. This is a chance to get your message to a broader audience, in a more conversational and accessible format. You can and should link to your academic paper from your blog post, so there’s no need to worry about including all the technical details. People will read your paper if they want more detail!

Check out these excellent pairs of academic papers and blog posts for inspiration:

I usually advise new bloggers that your target audience could be you-6-months-ago. For grad students, you may need to change this to you-2-years-ago. Assume that unlike your paper reviewers, the reader of your blog post has not read the related research papers. Assume your audience is intelligent, but not in your subfield. What does it take to explain your research to a friend in a different field?

Getting Started with your first post

Here are some tips I’ve shared previously to help you start your first post:

  • Make a list of links to other blog posts, articles, or studies that you like, and write brief summaries or highlight what you particularly like about them. Part of my first blog post came from my making just such a list, because I couldn’t believe more people hadn’t read the posts and articles that I thought were awesome.
  • Summarize what you learned at a conference you attended, or in a class you are taking.
  • Any email you’ve written twice should be a blog post. Now, if I’m asked a question that I think someone else would also be interested in, I try to write it up.
  • You are best positioned to help people one step behind you. The material is still fresh in your mind. Many experts have forgotten what it was like to be a beginner (or an intermediate) and have forgotten why the topic is hard to understand when you first hear it.
  • What would have helped you a year ago? What would have helped you a week ago?
  • If you’re wondering about the actual logistics, Medium makes it super simple to get started. Another option is to use Jekyll and Github pages. I can personally recommend both, as I have 2 blogs and use one for each (my other blog is here).

Decrappification, DeOldification, and Super Resolution

We presented this work at the Facebook f8 conference. You can see the video of our talk here, or read on for more details and examples.

In this article we will introduce the idea of “decrappification”, a deep learning method implemented in fastai on PyTorch that can do some pretty amazing things, like… colorize classic black and white movies—even ones from back in the days of silent movies, like this:

The same approach can make your old family photos look like they were taken on a modern camera, and even improve the clarity of microscopy images taken with state of the art equipment at the Salk Institute, resulting in 300% more accurate cellular analysis.

The genesis of decrappify

Generative models are models that generate music, images, text, and other complex data types. In recent years generative models have advanced at an astonishing rate, largely due to deep learning, and particularly due to generative adversarial networks (GANs). However, GANs are notoriously difficult to train, due to requiring a large amount of data, needing many GPUs and a lot of time to train, and being highly sensitive to minor hyperparameter changes.

fast.ai has been working in recent years towards making a range of models easier and faster to train, with a particular focus on using transfer learning. Transfer learning refers to pre-training a model using readily available data and quick and easy to calculate loss functions, and then fine-tuning that model for a task that may have fewer labels, or be more expensive to compute. This seemed like a potential solution to the GAN training problem, so in late 2018 fast.ai worked on a transfer learning technique for generative modeling.

The pre-trained model that fast.ai selected was this: Start with an image dataset and “crappify” the images, such as reducing the resolution, adding jpeg artifacts, and obscuring parts with random text. Then train a model to “decrappify” those images to return them to their original state. fast.ai started with a model that was pre-trained for ImageNet classification, and added a U-Net upsampling network, adding various modern tweaks to the regular U-Net. A simple, fast loss function was initially used: mean squared pixel error. This U-Net could be trained in just a few minutes. Then, the loss function was replaced with a combination of other loss functions used in the generative modeling literature (more details in the f8 video), and the model was trained for another couple of hours. The plan was then to finally add a GAN for the last few epochs - however it turned out that the results were so good that fast.ai ended up not using a GAN for the final models.
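
A crappifier along these lines needs only a few lines of Python with PIL (a sketch of the idea; the version fast.ai actually used differs in its details):

```python
import random
from PIL import Image, ImageDraw

def crappify(src_path, dest_path, low_res=96):
    img = Image.open(src_path).convert('RGB')
    w, h = img.size
    # Reduce the resolution (keeping the aspect ratio).
    img = img.resize((low_res, max(1, int(low_res * h / w))), resample=Image.BILINEAR)
    # Obscure part of the image with a random number drawn on top of it.
    x, y = random.randint(0, low_res // 2), random.randint(0, low_res // 2)
    ImageDraw.Draw(img).text((x, y), str(random.randint(10, 99)), fill=(255, 255, 255))
    # Save with a random low JPEG quality to introduce compression artifacts.
    img.save(dest_path, format='JPEG', quality=random.randint(10, 70))
```

The high-resolution original and the crappified copy then form an (input, target) pair for training the decrappifier.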

Low resolution jpeg image (left) upsampled with decrappify (right)

The genesis of DeOldify

DeOldify was developed at around the same time that fast.ai started looking at decrappification, and was designed to colorize black and white photos. Jason Antic watched the Spring 2018 fast.ai course that introduced GANs, U-Nets, and other techniques, and wondered about what would happen if they were combined for the purpose of colorization. Jason’s initial experiments with GANs were largely a failure, so he tried something else - the self-attention GAN (SAGAN). His ambition was to be able to successfully colorize real world old images with the noise, contrast, and brightness problems caused by film degradation. The model needed to be trained on photos with these problems simulated. To do this, he started with the images in the ImageNet dataset, converted them to b&w, and then added random contrast, brightness, and other changes. In other words, he was “crappifying” images too!

The results were amazing, and people all over the internet were talking about Jason’s new “DeOldify” program. Jeremy saw some of the early results and was excited to see that someone else was getting great results in image generation. He reached out to Jason to learn more. Jeremy and Jason soon realized that they were both using very similar techniques, but had both developed in some different directions too. So they decided to join forces and develop a decrappification process that included all of their best ideas.

'Migrant Mother' by Dorothea Lange (1936) colorized by DeOldify (right) and baseline algorithm (center)

The result of joining forces was a process that allowed GANs to be skipped entirely, and which can be trained on a gaming PC. All of Jason’s development was done on a Linux box in a dining room, and each experiment used only a single consumer GPU (a GeForce 1080Ti). The lack of impressive hardware and industrial resources didn’t prevent highly tangible progress. In fact, it probably encouraged it.

Jason then took the process even further, moving from still images to movies. He discovered that just a tiny bit of GAN fine-tuning on top of the process developed with fast.ai could create colorized movies in just a couple of hours, at a quality beyond any automated process that had been built before.

The genesis of microscopy super-resolution

Meanwhile, Uri Manor, Director of the Waitt Advanced Biophotonics Core (WABC) at the Salk Institute, was looking for ways to simultaneously improve the resolution, speed, and signal-to-noise of the images taken by the WABC’s state of the art ZEISS scanning electron and laser scanning confocal microscopes. These three parameters are notably in tension with one another - a variant of the so-called “triangle of compromise”, the bane of existence for all photographers and imaging scientists alike. The advanced microscopes at the WABC are heavily used by researchers at the Salk (as well as several neighboring institutions including Scripps and UCSD) to investigate the ultrastructural organization and dynamics of life, ranging anywhere from carbon capturing machines in plant tissues to synaptic connections in brain circuits to energy generating mitochondria in cancer cells and neurons. The scanning electron microscope is distinguished by its ability to serially slice and image an entire block of tissue, resulting in a 3-dimensional volumetric dataset at nearly nanometer resolution. The so-called “Airyscan” scanning confocal microscopes at the WABC boast a cutting-edge array of 32 hexagonally packed detectors that facilitate fluorescence imaging at nearly double the resolution of a normal microscope while also providing 8-fold sensitivity and speed.

Thanks to the Wicklow AI Medical Research Initiative (WAMRI), Jeremy Howard and Fred Monroe were able to visit Salk to see some of the amazing work done there, and discuss opportunities to use deep learning to help with some of Salk’s projects. Upon meeting Uri, it was immediately clear that fast.ai’s techniques would be a great fit for Uri’s needs for higher resolution microscopy. Fred, Uri, and a Salk-led team of scientists ranging from UCSD to UT-Austin, worked together to bring the methods into the microscopy domain, and the results were stunning. Using carefully acquired high resolution images for training, the group validated “generalized” models for super-resolution processing of electron and fluorescence microscope images, enabling faster imaging with higher throughput, lower sample damage, and smaller file sizes than ever reported. Since the models are able to restore images acquired on relatively low-cost microscopes, this model also presents an opportunity to “democratize” high resolution imaging to those not working at elite institutions that can afford the latest cutting edge instrumentation.

For creating microscopy movies, Fred used a different approach to the one Jason used for classic Hollywood movies. Taking inspiration from this blog post about stabilizing neural style transfer in video, he was able to add a “stability” measure to the loss function being used for creating single image super-resolution. This stability loss measure encourages images with stable features in the face of small amounts of random noise. Noise injection is already part of the process to create training sets at Salk anyway, so this was an easy modification. This stability, when combined with information about the preceding and following frames of video, significantly reduces flicker and improves the quality of output when processing low resolution movies. See more details about the process in the section below - “Notes on Creating Super-resolution Microscopy Videos”.
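
In rough terms (a sketch of the idea described above, not Fred's exact implementation), the stability term simply asks the model to produce nearly the same output when a small amount of random noise is added to its input:

```python
import torch
import torch.nn.functional as F

def stability_loss(model, lr_batch, noise_std=0.05):
    # Penalize output changes caused by small random perturbations of the input.
    clean_out = model(lr_batch)
    noisy_out = model(lr_batch + noise_std * torch.randn_like(lr_batch))
    return F.mse_loss(noisy_out, clean_out)

# total_loss = reconstruction_loss + stability_weight * stability_loss(model, lr_batch)
```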

A deep dive into DeOldify

Let’s look at what’s going on behind the scenes of DeOldify in some detail. But first, here’s how you can use DeOldify yourself! The easiest way is to use these free Colab notebooks, which run you through the whole process:

  • Image Colorization Notebook
  • Video Colorization Notebook

Or you can download the code and run it locally from the GitHub repo.

Advances in the state of the art

The Zhang et al “Colorful Image Colorization” model is currently popular, widely used, and was previously state of the art. What follows are original black and white photos (left), along with comparisons between the “Colorful Image Colorization” model (middle), and the latest version of DeOldify (right). Notice that the people and objects in the DeOldify photos are colorized more consistently, accurately, and in greater detail. Often, the images that DeOldify produces can be considered nearly photorealistic.

'Thanksgiving Maskers' (1911)
'Gypsy Camp, Maryland' (1925)

Additionally, high quality video colorization in DeOldify is made possible by advances in training that greatly increase stability and quality of rendering, largely brought about by employing NoGAN training. The following clips illustrate just how well DeOldify not only colorizes video (even special effects!), but also maintains temporal consistency across frames.

The Design of DeOldify

There are a few key design decisions in DeOldify that make a significant impact on the quality of rendered images.

Self-Attention

One of the most important design choices of DeOldify is the use of self-attention, as implemented in the “Self-Attention Generative Adversarial Networks” (SAGAN) paper. The paper summarizes the motivation for using them:

“Traditional convolutional GANs generate high-resolution details as a function of only spatially local points in lower-resolution feature maps. In SAGAN, details can be generated using cues from all feature locations. Moreover, the discriminator can check that highly detailed features in distant portions of the image are consistent with each other.”

This same approach was adapted to the critic and U-Net based generators used in DeOldify. The motivation is simple: You want to have maximal continuity, consistency, and completeness in colorization, and self-attention is vital for this. This becomes a particularly apparent problem in models without self-attention when you attempt to colorize images containing large features such as ocean water. Often, you’ll see these large areas filled in with inconsistent coloration in such models.
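
Concretely, a SAGAN-style self-attention block looks roughly like the following sketch. This is an illustrative layer in the spirit of the one used in DeOldify’s generator and critic, not its exact implementation; channel counts and the input tensor are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    def __init__(self, n_channels: int):
        super().__init__()
        # 1x1 convolutions project features into query/key/value spaces
        self.query = nn.Conv1d(n_channels, n_channels // 8, 1)
        self.key   = nn.Conv1d(n_channels, n_channels // 8, 1)
        self.value = nn.Conv1d(n_channels, n_channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned blend, starts at zero

    def forward(self, x):
        b, c, h, w = x.shape
        flat = x.view(b, c, h * w)                                 # treat pixels as a sequence
        q, k, v = self.query(flat), self.key(flat), self.value(flat)
        attn = F.softmax(torch.bmm(q.transpose(1, 2), k), dim=1)   # (hw, hw) affinities across all locations
        out = torch.bmm(v, attn)                                   # mix values using cues from the whole image
        return (self.gamma * out.view(b, c, h, w)) + x             # residual connection

x = torch.randn(2, 64, 32, 32)
print(SelfAttention(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```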

Albert Einstein (1939)

Zhang et al “Colorful Image Colorization” model output is on the left, and DeOldify output is on the right. Notice that the water in the background, as well as the clothing of the man on the left, is much more consistently, completely, and correctly colorized in the DeOldify model. This is thanks in large part to self-attention, by which colorization decisions can easily take into account a more global context of features.

Self-attention also helps drive fantastic levels of detail in the colorizations.

'Woman relaxing in her living room' (1920, Sweden)

Reliable Feature Detection

In contrast to other colorization models, DeOldify uses custom U-Nets with pretrained resnet backbones for each of its generator models. These are based on Fast.AI’s well-designed DynamicUnet, with a few minor modifications. The deviations from standard U-Nets include the aforementioned self-attention, as well as the addition of spectral normalization. These changes were modeled after the work done in the “Self-Attention Generative Adversarial Networks” paper.

The “video” and “stable” models use a resnet101 backbone and the decoder side emphasizes width (number of filters) over depth (number of layers). This configuration has proven to support the most stable and reliable renderings seen so far. In contrast, the “artistic” model has a resnet34 backbone and the decoder side emphasizes depth over width. This configuration is great for creating interesting colorizations and highly detailed renders, but at the cost of being more inconsistent in rendering than the “stable” and “video” models.

There are two primary motivations for using a pretrained U-Net. First, it saves training time: a large part of the colorization task, object recognition, has already been learned for free. That ImageNet-based object recognition would take at least a few days to train from scratch on a single GPU. Instead, we just fine-tune the pretrained network to fit our task, which is much less work. Additionally, the U-Net architecture, especially Fast.AI’s DynamicUnet, is simply superior for image generation applications. This is due to key detail-preserving and detail-enhancing features like cross connections from encoder to decoder, learnable blur, and pixel shuffle. The resnet backbone itself is well-suited to the task of scene feature recognition.
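
For reference, here is a hedged sketch of how such a pretrained-backbone U-Net can be assembled with fastai v1’s unet_learner (which builds a DynamicUnet). The arguments shown are illustrative rather than DeOldify’s exact configuration, and `data` is assumed to be a DataBunch of (black and white, color) image pairs, e.g. built from an ImageImageList as in the fast.ai lesson 7 notebooks.

```python
from fastai.vision import *

learn = unet_learner(
    data, models.resnet34,        # the "video" and "stable" models use resnet101
    blur=True,                    # learnable blur in the upsampling path
    self_attention=True,          # SAGAN-style self-attention block
    norm_type=NormType.Spectral,  # spectral normalization on the convolutions
)                                 # DeOldify swaps the default loss for a feature (perceptual) loss
learn.fit_one_cycle(1, 1e-4)      # fine-tune the pretrained backbone rather than train from scratch
```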

To further encourage robustness in dealing with old and low quality images and film, we train with fairly extreme brightness and contrast augmentations. We’ve also employed gaussian noise augmentation in video model training in order to reduce model sensitivity to meaningless noise (grain) in film.
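
As a rough illustration (not DeOldify’s exact transforms or ranges), strong lighting augmentation in fastai v1 plus a simple grain-style noise function might look like this:

```python
from fastai.vision import *
import torch

# Fairly extreme brightness/contrast jitter; the actual ranges used differ.
tfms = get_transforms(max_lighting=0.5, p_lighting=0.9)

# For the video model, gaussian "film grain" noise is also added during training.
# A simple tensor-level version (for images scaled to [0, 1]) could look like:
def add_grain(x: torch.Tensor, std: float = 0.05) -> torch.Tensor:
    return (x + torch.randn_like(x) * std).clamp(0., 1.)
```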

When feature recognition fails, jarring render failures such as “zombie hands” can result.

An example of 'zombie hand' rendering failure from the movie “Psycho” (1960).

NoGAN Training

NoGAN is a new and exciting technique in GAN training that we developed, in pursuit of higher quality and more stable renders. How, and how well, it works is a bit surprising.

Here is the NoGAN training process:

  1. Pretrain the Generator. The generator is first trained in a more conventional and easier to control manner - with Perceptual Loss (aka Feature Loss) by itself. GAN training is not introduced yet. At this point you’re training the generator as best as you can in the easiest way possible. This takes up most of the time in NoGAN training. Keep in mind: this pretraining by itself will get the generator model far. Colorization will be well-trained as a task, although the colors will tend toward dull tones. Self-Attention will also be well-trained at this stage, which is very important.

  2. Save Generated Images From Pretrained Generator.

  3. Pretrain the Critic as a Binary Classifier. Much like in pretraining the generator, what we aim to achieve in this step is to get as much training as possible for the critic in a more “conventional” manner which is easier to control. And there’s nothing easier than a binary classifier! Here we’re training the critic as a binary classifier of real and fake images, with the fake images being those saved in the previous step. A helpful thing to keep in mind here is that you can simply use a pre-trained critic used for another image-to-image task and refine it. This has already been done for super-resolution, where the critic’s pretrained weights were loaded from that of a critic trained for colorization. All that is needed to make use of the pre-trained critic in this case is a little fine-tuning.

  4. Train Generator and Critic in (Almost) Normal GAN Setting. Quickly! This is the surprising part. It turns out that in this pretraining scenario, the critic will rapidly drive adjustments in the generator during GAN training. This happens during a narrow window of time before an “inflection point” of sorts is hit. After this point, there seems to be little to no benefit in training any further in this manner. In fact, if training is continued after this point, you’ll start seeing artifacts and glitches introduced in renderings.

In the case of DeOldify, training to this point requires iterating through only about 1% to 3% of ImageNet data (or roughly 2600 to 7800 iterations on a batch size of five). This amounts to just around 30-90 minutes of GAN training, which is in stark contrast to the three to five days of progressively-sized GAN training that was done previously. Surprisingly, during that short amount of training, the change in the quality of the renderings is dramatic. In fact, this makes up the entirety of GAN training for the video model. The “artistic” and “stable” models go one step further and repeat steps 2-4 of the NoGAN training process until there’s no more apparent benefit (around five repeats).

Note: a small but significant change to this GAN training that deviates from conventional GANs is the use of a loss threshold that must be met by the critic before generator training commences. Until then, the critic continues training to “catch up” in order to be able to provide the generator with constructive gradients. This catch up chiefly takes place at the beginning of GAN training which immediately follows generator and critic pretraining.
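
Putting the four steps and the critic loss threshold together, here is a minimal, self-contained PyTorch sketch of the NoGAN schedule. The toy models, random data, iteration counts, loss weights, and threshold value are all illustrative placeholders, not DeOldify’s actual components.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins so the sketch runs end to end; DeOldify uses a DynamicUnet
# generator and a deep convolutional critic instead.
generator = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(8, 3, 3, padding=1))
critic = nn.Sequential(nn.Conv2d(3, 8, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                       nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
c_opt = torch.optim.Adam(critic.parameters(), lr=1e-4)

def batch(bs=4):  # stand-in for a dataloader of (black and white, color) pairs
    return torch.rand(bs, 1, 32, 32), torch.rand(bs, 3, 32, 32)

def real_fake_loss(real, fake):
    logits = critic(torch.cat([real, fake]))
    labels = torch.cat([torch.ones(len(real), 1), torch.zeros(len(fake), 1)])
    return F.binary_cross_entropy_with_logits(logits, labels)

# 1. Pretrain the generator with a conventional loss (perceptual loss in DeOldify,
#    plain L1 here to keep the sketch short). This is most of the training time.
for _ in range(200):
    x, y = batch()
    g_opt.zero_grad(); F.l1_loss(generator(x), y).backward(); g_opt.step()

# 2. + 3. Pretrain the critic as a real-vs-fake binary classifier. (DeOldify saves
#    the generator's outputs to disk first; here they are generated on the fly.)
for _ in range(200):
    x, y = batch()
    with torch.no_grad():
        fake = generator(x)
    c_opt.zero_grad(); real_fake_loss(y, fake).backward(); c_opt.step()

# 4. Brief (almost) normal GAN phase, stopped early, before the "inflection point".
crit_loss_thresh = 0.65  # illustrative value
for _ in range(50):
    x, y = batch()
    c_loss = real_fake_loss(y, generator(x).detach())
    c_opt.zero_grad(); c_loss.backward(); c_opt.step()
    if c_loss.item() < crit_loss_thresh:  # generator trains only once the critic has "caught up"
        x, y = batch()
        pred = generator(x)
        adv = F.binary_cross_entropy_with_logits(critic(pred), torch.ones(len(pred), 1))
        g_loss = F.l1_loss(pred, y) + 0.01 * adv  # carefully weighted non-GAN + GAN loss
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```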

Progress of NoGAN training on a single video frame for colorization.

Note that the frame at 1.4% is considered to be just before the “inflection point” - after this point, artifacts and incorrect colorization start to be introduced. In this case, the actors’ skin becomes increasingly orange and oversaturated, which is undesirable. These were generated using a learning rate of 1e-5. The current video model of DeOldify was trained at a learning rate of 5e-6 to make it easier to find the “inflection point”.

Research on NoGAN training is still ongoing, so there are still quite a few questions to investigate. First, the technique seems to accommodate small batch sizes well. The video model was trained using a batch size of 5 (and the model uses batch normalization). However, there’s still the issue of artifacts and discoloration being introduced after the “inflection point”, and it’s suspected that this could be reduced or eliminated by mitigating batch size related issues. This could be done either by increasing batch size, or perhaps by using normalization that isn’t affected by batch size. It may very well be that once these issues are addressed, additional productive training can be accomplished before hitting a point of diminishing returns.

Another open question with NoGAN is how broadly applicable the approach is. It stands to reason that this should work well for most image-to-image tasks, and perhaps even non-image related training. However, this simply hasn’t yet been explored enough to draw strong conclusions. We did get interesting and impressive results applying NoGAN to super-resolution. In just fifteen minutes of direct GAN training (or 1500 iterations), the output of Fast.AI’s Part 1 Lesson 7 Feature Loss based super-resolution training is noticeably sharpened (original lesson Jupyter notebook here).

Progress of NoGAN training on a single image for super resolution.

Finally, the best practices for NoGAN training haven’t yet been fully explored. It’s worth mentioning again that the “artistic” and “stable” models were trained on not just one, but repeated cycles of NoGAN. What’s still unknown is just how many repeats are possible while still getting a benefit, and how to make the training process less tedious by automatically detecting an early stopping point just before the “inflection point”. Right now, the determination of the inflection point is a manual process, and consists of a person visually assessing the generated images at model checkpoints. These checkpoints need to be saved at least every 0.1% of the total data—otherwise, the inflection point is easily missed. This is definitely tedious, and prone to error.

How Stable Video is Achieved

The Problem – A Flickering Mess

Just a few months ago, the technology to create well-colorized video seemed to be out of reach. If you took the original DeOldify model and merely colorized each frame just as you would any other image, the result was this—a flickering mess:

'Metropolis' (1927), rendered using the initial release of DeOldify.

The immediate solution that usually springs to mind is temporal modeling of some sort, whereby you enforce constraints on the model during training over a window of frames to keep colorization decisions more consistent. This would seem to make sense, but it adds a significant amount of complication, and the not-so-rare cases of changing scenes raise further questions about how to handle continuity. The rabbit hole deepens, and quickly. With these assumptions of needing temporal coherence enforced, the prospect of making seamless and flicker-free colorized video seemed quite far off. Luckily, it turns out these assumptions were wrong.

The Problems Melt Away with NoGAN

A surprising observation that we’ve made while developing DeOldify is that the colorization decisions are not at all arbitrary, even after GAN training and across different models. Rather, different models and training regimes keep arriving at almost the same solution, with only very minor variations in colorization. This even happens with the colorization of things you might expect to be unknowable and unconstrained by the luminance information in black and white photos: Clothing, cars, special effects in movies, etc. We’re not sure yet what exactly the model is learning to be able to more or less deterministically colorize images. But the bottom line is that temporal coherence concerns simply go away when you realize that there’s nothing to track—objects in frames will keep rendering the same regardless.

Additionally, it turns out that in NoGAN training the learning that takes place before the “inflection point” trains the generator in a very effective way—not only in terms of quickly achieving good colorization, but also in avoiding artifacts, discoloration, and inconsistency in generator renders. In other words, those artifacts and glitches in that “flickering mess” render of Metropolis above are coming from too much GAN training, and we can mitigate that pretty much completely with NoGAN!

'Metropolis' (1927), rendered using the new NoGAN-based release of DeOldify. Notice the much improved rendering situation!

NoGAN is the most significant element in achieving video render stability, but there are a few additional design choices that also make an impact. For example, a larger resnet backbone (resnet101) makes a noticeable difference in how accurately and robustly features are detected, and therefore how consistently the frames are rendered as objects and scenes move. Another consideration is render resolution—increasing this makes a positive difference in some cases, but not nearly as big as one may expect. Most of the videos we’ve rendered have been done at resolutions ranging from 224px to 360px, and this tends to work just fine.

The end result of all this is that flicker-free, temporally consistent, and colorful video is achieved simply by rendering individual frames as if each were any other image! There is zero temporal modeling involved.

The DeOldify Graveyard: Things That Didn’t Work

For every design experiment that actually worked, there were at least ten that didn’t. This list is not exhaustive by any stretch but it includes what we consider to be particularly helpful to know.

Wasserstein GAN (WGAN) and Its Variants

The original approach attempted in the development of DeOldify was to base the architecture on Wasserstein GAN (WGAN). It turns out the stability of the WGAN and its subsequent improvements were not sufficient for practical application in colorization. Training would work somewhat for a while, but would invariably diverge from a stable solution as GANs are known to do. To an extent, the results were entertaining. What actually did wind up working extremely well (the first time even) was modeling DeOldify after Self-Attention Generative Adversarial Networks instead.

Early DeOldify model based on Wasserstein GAN. That didn’t work out so well, though the results did sometimes seem to have some artistic merit.

Various Other Normalization Schemes

The following normalization variations were attempted. None of them worked nearly as well as having batch normalization and spectral normalization combined in the generator, and just spectral normalization in the critic.

  • Spectral Normalization Only in Generator. This trained more slowly and was generally more unstable.
  • Batchnorm at Output of Generator. This slowed down training significantly and didn’t seem to provide any real benefit.
  • Weight Normalization in Generator. Ditto on the slowed training, and images didn’t turn out looking as good either. Interestingly however, it seems like weight normalization works the best when doing NoGAN training for super resolution.
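
For concreteness, here is a hedged sketch of the combination that did work best: spectral normalization plus batch normalization in the generator blocks, and spectral normalization only (no batchnorm) in the critic blocks. Layer sizes and activations are illustrative.

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

def gen_block(ni, nf):
    return nn.Sequential(
        spectral_norm(nn.Conv2d(ni, nf, 3, padding=1)),
        nn.BatchNorm2d(nf),          # batchnorm kept in the generator
        nn.ReLU(inplace=True),
    )

def critic_block(ni, nf):
    return nn.Sequential(
        spectral_norm(nn.Conv2d(ni, nf, 4, stride=2, padding=1)),
        nn.LeakyReLU(0.2, inplace=True),   # no batchnorm in the critic
    )
```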

Other Loss Functions

The interaction between the conventional (non-GAN) loss function and the GAN loss/training turns out to be crucial, yet tricky. For one, you have to weigh the non-GAN and GAN losses carefully, and it seems this can only come out of experimentation. The nice thing about NoGAN is that the iterations on this process are very quick relative to other GAN training regimes—it’s a matter of minutes instead of days to see the end result.

Another tricky aspect of the loss interaction is that you can’t just select any non-GAN loss function to work side by side with GAN loss. It turns out that Perceptual Loss (aka Feature Loss) empirically works best for this, especially in comparison to L1 and Mean Squared Error (MSE) loss.

It seems that since most of the emphasis in NoGAN training is in pretraining, it would be especially important to make sure that pretraining is taken as far as possible in rendering quality before making the switch to GAN. Perceptual Loss does just that—by itself, it produces the most colorful results of all the non-GAN losses attempted. In contrast, simpler loss functions such as MSE and L1 loss tend to produce dull colorizations as they encourage the networks to “play it safe” and bet on gray and brown by default.
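
As a reference point, a minimal VGG-based perceptual (feature) loss looks roughly like the sketch below. The layer choice and the equal weighting of the pixel and feature terms are illustrative assumptions; they differ from the multi-layer feature loss actually used for generator pretraining.

```python
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16

class FeatureLoss(nn.Module):
    def __init__(self, layer_idx=23):                 # an intermediate VGG16 layer (illustrative choice)
        super().__init__()
        vgg = vgg16(pretrained=True).features[:layer_idx].eval()
        for p in vgg.parameters():
            p.requires_grad_(False)                   # fixed feature extractor
        self.vgg = vgg

    def forward(self, pred, target):
        pixel = F.l1_loss(pred, target)                       # keep a pixel-level term
        feat  = F.l1_loss(self.vgg(pred), self.vgg(target))   # compare high-level VGG features
        return pixel + feat
```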

Early DeOldify model using just Mean Squared Error (MSE) Loss. This loss fails to encourage interesting colorization.

Additions to perceptual loss were also attempted. Most notable were Gram-based style loss and Wasserstein distance. While the two cannot be ruled out and will be revisited in the future, both wound up encouraging strange orange and yellow discolorations when combined with GAN training. It’s suspected that these losses were simply not used effectively.

Reduced Number of Model Parameters

Something that tends to surprise people about DeOldify is the large model size. In the latest iteration, the “video” and “stable” models are set to a width of 1000 filters on the decoder side for most of the layers. The “artistic” model has the number of filters multiplied by 1.5 over the standard DynamicUnet configuration. Similarly, the critic is also rather hefty, with a starting width of 256 as opposed to the more conventional 64 or 128. Many experiments were done attempting to reduce the number of parameters, but they all generally ran into the same problem: The resulting renders were significantly less colorful.

Creating Super-resolution Microscopy Videos

Finally, we’ll discuss some of the details of the approach used at the Salk Institute for creating high resolution microscopy videos. At a high level, the steps involved in creating a model that produces high resolution microscopy videos from low resolution sequences are:

  • Acquire high resolution source material for training and low resolution material to be improved.
  • Develop a crappifier function
  • Create low res training dataset of image-tuples (groups of 3 images)
  • Create two training sets, A and B, by applying the crappifier to each image-tuple twice
  • Train the model on both training sets simultaneously with “stability” loss
  • Use the trained model to generate high resolution videos by running it across real low resolution source material

Acquisition of Source Material

At Salk we have been fortunate in that we have produced decent results using only synthetic low resolution data. This is important because perfectly aligned pairs of high resolution and low resolution images are rare and time consuming to acquire - and that would be even harder, or impossible, with video of live cells. In this case the files were acquired in the proprietary czi format. Fortunately, there is a Python-based tool for reading this format here.
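
As a hedged example, assuming the linked tool is the czifile package, loading an acquisition might look like this (the file name is a placeholder):

```python
import numpy as np
import czifile

img = czifile.imread('acquisition.czi')  # placeholder file name
print(img.shape, img.dtype)              # czi arrays typically carry extra singleton axes
frames = np.squeeze(img)                 # drop singleton dimensions before further processing
```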

Developing a Crappifier

In order to produce synthetic training data we need a “crappifier” function. This is a function that transforms a high resolution image into a low resolution image that approximates the real low resolution images we will be working with once our model is trained. The crappifier injects randomness in the form of both Gaussian and Poisson noise, both of which are present in real microscopy images. We were influenced in this design by the excellent work done by the CSBDeep team.

The crappifier can be simple but does materially impact both the quality and characteristics of output. For example, we found that if our crappifier injected too much high frequency noise into the training data, the trained model would have a tendency to eliminate thin and fine structures like those of neurons.
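
Here is a hedged sketch of what such a crappifier might look like for a single-channel float image. The noise levels, the downscaling factor, and whether to downscale at all are illustrative choices, not the exact function used at Salk.

```python
import numpy as np
from skimage.transform import rescale

def crappify(hr_img, scale=4, poisson_scale=200.0, gauss_std=0.01, rng=np.random):
    "Degrade a float image in [0, 1] into a synthetic low resolution version."
    lr = rescale(hr_img, 1 / scale, anti_aliasing=True)   # throw away resolution (optional step)
    lr = np.clip(lr, 0.0, 1.0)
    lr = rng.poisson(lr * poisson_scale) / poisson_scale  # shot (Poisson) noise
    lr = lr + rng.normal(0.0, gauss_std, lr.shape)        # sensor (Gaussian) noise
    return np.clip(lr, 0.0, 1.0).astype(np.float32)
```

Too much high frequency noise here is exactly what caused thin structures like neuronal processes to disappear, so these parameters are worth tuning against real low resolution acquisitions.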

Generating the Synthetic Low Resolution Data for Training

The next step is to bundle sequences of images and their target together for training like this:


This image shows an example from a training run in which we use 5 sequential images (t-2, t-1, t, t+1, t+2) to predict a single super-resolution output image (also at time t).

For the movies we used bundles of 3 images and predicted the high resolution image at the corresponding middle time. In other words, we predicted the super-resolution frame at time t from the low resolution images at times t-1, t, and t+1.

We chose 3 images because that conveniently allowed us to reuse pre-existing super-resolution network architectures, data loaders, and loss functions that were written for 3 channels of input.
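
A small sketch of that bundling step, assuming `frames` is a (T, H, W) grayscale stack:

```python
import numpy as np

def make_tuples(frames):
    "Yield (3-channel input, target index) pairs: frames t-1, t, t+1 predict frame t."
    for t in range(1, len(frames) - 1):
        x = np.stack([frames[t - 1], frames[t], frames[t + 1]], axis=0)  # shape (3, H, W)
        yield x, t   # the high resolution target corresponds to the middle frame t
```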

Creating a Second Set of Low Resolution Data

To use stability loss, we apply the crappifier function to the source material twice in order to create two parallel datasets. Because the crappifier injects random noise, the two datasets will differ slightly from each other but share the same high resolution targets. A perfectly stable model would predict identical high resolution output images from both training datasets, ignoring the random noise.

Training the Model with Stability Loss

In addition to the normal loss functions we would use for super-resolution, we need to choose a measure of stability loss. This measures how similar the model’s outputs are when it is applied to each of the two training sets, which, as explained above, differ only in the randomly injected noise.

Given the low resolution image sequence X that we will use to predict the true high resolution image T, we create X1 and X2, which result from two separate applications of the random-noise-injecting crappifier function:

X1 = crappifier(X) and X2 = crappifier(X)

Given our trained model M, we then predict Y1 and Y2 as follows:

Y1 = M(X1) and Y2 = M(X2)

This gives us the super-resolution losses L1 = loss(Y1, T) and L2 = loss(Y2, T). Our stability loss is the difference between the two predicted images. We used an L1 (absolute error) distance, but you could also use a feature loss or some other way to measure the difference:

LossStable = loss(Y1,Y2)

Our final training loss is therefore: loss = L1 + L2 + LossStable
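
Translated directly into PyTorch, with an L1 distance for each term and equal weights as assumptions, and with `model` and `crappifier` standing for the network and noise-injecting function described above:

```python
import torch.nn.functional as F

def stability_training_loss(model, crappifier, X, T):
    X1, X2 = crappifier(X), crappifier(X)   # two independently-noised versions of the same input
    Y1, Y2 = model(X1), model(X2)
    L1 = F.l1_loss(Y1, T)                   # fidelity to the true high resolution target
    L2 = F.l1_loss(Y2, T)
    loss_stable = F.l1_loss(Y1, Y2)         # penalize sensitivity to the injected noise
    return L1 + L2 + loss_stable
```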

Generating Movies

Now that we have a trained model, generating high resolution output from low resolution input is simply a matter of running the model across a sliding window of, in this case, three low resolution input images at a time. Imageio is one convenient way to write out multi-image tif files or mp4 files.
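
A hedged sketch of that inference loop, assuming `model` and a (T, H, W) array `lr_frames` of values in [0, 1] from the earlier steps (mp4 output also requires imageio’s ffmpeg plugin):

```python
import imageio
import numpy as np
import torch

outputs = []
with torch.no_grad():
    for t in range(1, len(lr_frames) - 1):
        window = np.stack([lr_frames[t - 1], lr_frames[t], lr_frames[t + 1]], axis=0)
        x = torch.from_numpy(window).unsqueeze(0).float()   # (1, 3, H, W) sliding window
        y = model(x).squeeze().clamp(0, 1).cpu().numpy()    # predicted high-res middle frame
        outputs.append((y * 255).astype(np.uint8))

imageio.mimwrite('superres.mp4', outputs, fps=30)           # or write a multi-image .tif instead
```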

Examples:

  • Real low resolution input
  • Single frame of input and conventional loss
  • Multiple frames of input and stability loss penalty