I tried the image completion model from SIGGRAPH, and it was great

The paper about image inpainting was presented at SIGGRAPH, and I gave it a try since the trained model has been published.

Github image

Paper: “Globally and Locally Consistent Image Completion”

This is the original paper presented at SIGGRAPH.

Take a look at the Github

This GitHub repository was created by the author of the paper. It seems to have been published around February 2018.

We can download the trained model from this GitHub repository.

I found that Lua, Torch, and OpenCV need to be installed. I didn’t want to mess up my current local environment, so I decided to prepare it with Docker.

Build the environment with Docker

After searching for a base image to start with, I decided to use a Docker image that already has OpenCV installed. (But eventually I found that OpenCV 3.1.0 is necessary, so I installed it again.)

This is the Dockerfile I made.

FROM jjanzic/docker-python3-opencv

RUN apt-get update \
    && apt-get install -y curl libreadline-dev \
    && curl -R -O http://www.lua.org/ftp/lua-5.3.4.tar.gz \
    && tar zxf lua-5.3.4.tar.gz \
    && cd lua-5.3.4 \
    && make linux test \
    && make install

RUN apt-get install -y sudo

RUN cd \
    && git clone https://github.com/torch/distro.git ~/torch --recursive \
    && cd ~/torch \
    && bash install-deps \
    && ./install.sh

RUN sudo apt-get install -y build-essential \
    && sudo apt-get install -y cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev \
    && sudo apt-get install -y python-dev python-numpy libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev

RUN wget https://github.com/opencv/opencv/archive/3.1.0.tar.gz \
    && mv 3.1.0.tar.gz opencv-3.1.0.tar.gz \
    && tar zxf opencv-3.1.0.tar.gz \
    && cd opencv-3.1.0 \
    && mkdir build \
    && cd build \
    && cmake -D CMAKE_BUILD_TYPE=Release -D CMAKE_INSTALL_PREFIX=/usr/local .. \
    && make -j7 \
    && make install

RUN . /root/torch/install/bin/torch-activate \
    && luarocks install cv
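If you do want to build it yourself, you can build the image from the directory containing this Dockerfile like so (the tag name here is just an example):

```shell
# Build the image; this compiles Lua, Torch, and OpenCV from source,
# so it takes quite a while
docker build -t torch-opencv .
```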

It takes a long time to build, so I recommend pulling the Docker image instead.

Pull Docker image

I have already pushed my Docker image to Docker Hub, so please try using it. (This is faster than building from scratch.)

$ docker pull takp/torch-opencv:latest


First, git clone the original code.

$ git clone https://github.com/satoshiiizuka/siggraph2017_inpainting.git
$ cd siggraph2017_inpainting

Then run a Docker container from the image we just downloaded.

$ cd siggraph2017_inpainting
$ docker images
takp/torch-opencv    latest     76a6895xxxxx        9 hours ago         4.61GB
$ docker run -it -v `pwd`:/mount takp/torch-opencv  /bin/bash

From here, you need to run the commands inside the docker container.

$ cd /mount
$ bash download_model.sh
$ th inpaint.lua --input example.png --mask example_mask.png
  gpu : false
  mask : "example_mask.png"
  input : "example.png"
  nopostproc : false
  maxdim : 500
Loding model...	
Performing post-processing...	
libdc1394 error: Failed to initialize libdc1394

As you see above, if the message Done. is displayed at the end, it worked fine! (The libdc1394 error is a harmless camera-driver warning inside Docker and can be ignored.) The output image out.png should be created.
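If you want to run this on your own image, you also need a mask image of the same size as the input. Judging from the repository’s example, the region to be completed is painted white and everything else is black. Here is a minimal Python sketch (the size and hole coordinates are just an illustration, not values from the repository):

```python
import numpy as np

# Hypothetical example: a mask for a 500x400 input image.
# White (255) marks the region to be completed; black (0) is kept.
height, width = 400, 500
mask = np.zeros((height, width, 3), dtype=np.uint8)

# Paint a rectangular hole, e.g. over an object you want removed.
# Adjust the coordinates to your own image.
mask[150:300, 200:280] = 255

# Save it next to your input, e.g. with OpenCV:
# import cv2
# cv2.imwrite("my_mask.png", mask)
print(mask.shape, int(mask.max()))
```

Then pass it with --mask my_mask.png just like the example above.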

Let’s try with the example image.

(Original Image)

Original Example

(Input Image with blank space)

Input Example

(Output Image)

Output Example

Wow, the blank space is completed perfectly! The walking person was removed, and the model generated a natural background.

Looking closely, I can see small differences, but this is amazingly good.

Try with other images

When we don’t specify a mask image, the script generates random blank spaces and completes them.
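The random holes presumably look something like this. Here is a rough Python sketch of generating a mask with a few random rectangles; this is my own illustration of the idea, not the repository’s actual code:

```python
import random
import numpy as np

def random_hole_mask(height, width, n_holes=2, max_size=120, seed=None):
    """Create a mask with a few random white rectangles (regions to complete)."""
    rng = random.Random(seed)
    mask = np.zeros((height, width), dtype=np.uint8)
    for _ in range(n_holes):
        h = rng.randint(20, max_size)
        w = rng.randint(20, max_size)
        top = rng.randint(0, height - h)
        left = rng.randint(0, width - w)
        mask[top:top + h, left:left + w] = 255  # white = region to fill
    return mask

mask = random_hole_mask(400, 500, seed=0)
print(mask.shape, int(mask.max()))
```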

$ th inpaint.lua --input b.png

Example: Exterior of a cafe

(Original Image)

(Input Image with blank space)

Input 1

(Output Image)

Output 1

There are two blank spaces on the left side, and the larger one is completed very naturally, almost the same as the original. The smaller one looks a little artificial, but I don’t think it’s a problem when looking at the image as a whole.

Example: Toy Poodle (my dog)

(Original Image)

(Input Image with blank space)

Input 3

(Output Image)

Output 3

As you can see, the left eye is a bit collapsed, but it still expresses the furry texture well. It also generates the leash beautifully.

I’m very impressed with this! Please try it with any images you have and see for yourself!