Reinforcement learning meetup

Yesterday, Berge hosted the Göteborg meetup on machine learning and data science (MLDS-GBG). Jesper Derehag from Ericsson joined us as the guest speaker of the night. Jesper gave an introduction to reinforcement learning and talked about how DeepMind trained their AI to play Atari games at a (super)human level.

The event was attended by around 50 people from the machine learning community of Göteborg. Students and professionals enjoyed an evening of discussions and mingling along with listening to the talk. From the Berge side, we are very pleased with the great interest in the event. Unfortunately, we could not host everyone who wanted to come, so hopefully we will be able to do something bigger in the near future. It is great to see the community and the interest in machine learning growing rapidly in Göteborg!

Report: GPU Technology Conference Amsterdam

In spring, we went to North America, but now in the autumn we get to stay around in our own part of the world. GTC Europe is a little sister of the main conference in Silicon Valley, but still very interesting and inspiring and therefore definitely worth a visit. It touches more than one of Berge's interests with its GPU oriented collection of speakers, exhibitors, and visitors. Jakob Andersson and Peter Karlsson made sure to spend the two days in Amsterdam absorbing all the news and trends.


Nvidia and the keynote

Enter Jen-Hsun Huang in his characteristic leather jacket. Cue two hours of characteristically CEO, product-launching, super-hype-dripping, deep-learning-loving, GPU-hyping keynote!

GTC is Nvidia's conference and it shows, but their full commitment to showing off their products also adds a lot of fun. The keynote mixed product launches and partnership announcements with both impressive and some not-so-impressive deep learning demos. What is the value of predicting, in real time, the potential viewers of a shared live video? And what should Rembrandtising a video on the fly make us think of? Well, the future might tell.

The big surprise for us was the announced partnership with SAP to bring deep learning to enterprises around the world through Nvidia hardware and the SAP products. In hindsight, it seems a logical step due to all the data in their systems and all the benefits deep learning could bring to SAP's customers, but it surprised us a bit since we do not associate them with innovation, at least not in Sweden. However, it is never too late to change and we want to be the first to applaud trying out new things.

Nvidia's CEO Jen-Hsun Huang launching their extended Drive PX 2 range with AutoCruise, AutoChauffeur, and Full Autonomy on one architecture.

Automotive is a big focus for Nvidia today. They launched an extended Drive PX 2 range with smaller solutions for companies not targeting full automation. Xavier – their next-gen SoC introduced at the end of the keynote – will take further steps to adapt their products for autonomous drive by bringing down power consumption and raising system safety capabilities (ASIL C claimed). They launched a partnership with TomTom on HD mapping, in the end also targeting autonomous vehicles. To show off the capabilities, Jen-Hsun talked at length about their own autonomous vehicle (more on that below) and their software platform called DriveWorks.

Everything else was of course not forgotten! VR got some attention, especially at the show, where several demos – using the same HTC Vive we use at the Berge office – were set up to show what is happening in gaming and in industry. High-performance computing was the fourth area of focus at GTC this time. They showed the new Tesla P4 and P40 accelerators and their partnership with IBM to bring GPUs to data centres. Both VR and HPC had their own tracks at the conference, but focusing on deep learning during this visit, we were unable to attend most of them.

In the end, it is clear that Nvidia is committed to bringing the computational performance needed by deep learning, autonomous drive, and other advancements. And, of course, they should be! They have been given a massive opportunity with the new demands that their existing product, the GPU, is so well suited to meet. We are glad someone brings that computational power: that this year's ImageNet winner is four times deeper than Microsoft's ResNet, which won in 2015, shows that we are going to need it!


Deep learning and artificial intelligence

Dominik Grewe of Google DeepMind talking about AlphaGo and how they aim to solve intelligence in order to solve everything else.

Machine learning and artificial intelligence are growing fast through the deep learning revolution. Berge believes we will see disruption of many industries in the near future. Most visitors to GTC seem to agree (of course, selection bias!).

During the keynote, Nvidia showed statistics indicating that the number of deep learning developers has grown by a factor of 25 in the last two years. They did not show any source, but it is apparent that something big is happening.

On the fun side, it was great to listen to one of the big names in DL, Google DeepMind, and how they set out on a mission to first "solve intelligence" and then "solve everything else". Dominik Grewe walked us from how to solve Atari games to AlphaGo and its network architecture. Although not that directly applicable, games are a really good platform for trying out new things, since you have full control and since they are designed to be challenging for humans. It will be interesting to follow them in the coming years!
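For readers new to the field, the core of those Atari agents is the Q-learning update rule. Below is a minimal, purely illustrative sketch of tabular Q-learning on a made-up five-cell corridor world; DeepMind's agents keep the same update but replace the table with a deep network over game pixels:

```python
import random

# A five-cell corridor: start in cell 0, reward 1.0 for reaching cell 4.
N_STATES = 5
ACTIONS = [-1, +1]          # move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

random.seed(0)
for _ in range(500):                                   # episodes
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # The Q-learning update: bootstrap from the best action in the next state.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# The learned greedy policy should be "move right" in every non-terminal cell.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

The DQN work on Atari keeps exactly this update but estimates Q with a convolutional network and stabilises training with experience replay and a target network.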

Two talks that were more applicable came from Russian Yandex and a cooperation between DFKI and PwC. Yandex showed a deep learning solution that uploads information about free parking spots to their map service by extracting it from video data – either from the dash cams common in Russia or from cell phones used with their navigation service. It seemed to work well, so expect that to happen soon. As a fun side-story to show the power of their network, they had mounted a camera on a bike when arriving in Amsterdam (city of bikes, right!) and, after running that data through their algorithms, it actually worked well even though the video was shaky, shot from a different angle, and recorded in a new environment.

DFKI, a German AI research centre, and PwC showed how to use deep learning to find potentially fraudulent transactions in company-internal data. Their solution can flag transactions for further inspection by an auditor by finding outliers across the full set of data dimensions (compared to traditional approaches that look at outliers in individual dimensions and thus mostly find false positives). This is really a field with commercial and societal interest, but it is hampered by two difficulties: (1) fraudsters are trying to hide their transactions, and (2) the data is confidential, so there exists no public dataset to experiment on or to compare algorithms on, cf. ImageNet in object recognition. However, the results show that these difficulties can be surmounted even though it may take more time.
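To illustrate the point about full-dimensional versus per-dimension outliers (the actual DFKI/PwC method was not disclosed in detail, so the data and thresholds here are invented), a Mahalanobis-distance check can flag a transaction whose individual values all look normal but whose combination breaks the correlation structure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "transactions": amount and invoice-line count are strongly correlated.
amount = rng.normal(100.0, 10.0, 1000)
count = amount / 10.0 + rng.normal(0.0, 0.5, 1000)
data = np.column_stack([amount, count])

# A suspicious transaction: both values are individually unremarkable,
# but the combination (high amount, low count) breaks the correlation.
suspect = np.array([115.0, 8.0])

# Per-dimension z-scores: the traditional check, prone to missing this case.
z = np.abs((suspect - data.mean(axis=0)) / data.std(axis=0))

# Mahalanobis distance: one outlier score over the full covariance structure.
cov_inv = np.linalg.inv(np.cov(data, rowvar=False))
d = suspect - data.mean(axis=0)
mahalanobis = float(np.sqrt(d @ cov_inv @ d))

print(z, mahalanobis)
```

The z-scores stay below any sensible per-dimension threshold, while the Mahalanobis score is far out in the tail – the same intuition behind flagging outliers across all dimensions at once.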

On the practical side of deep learning, there was some Swedish participation through Martin Englund from the Facebook DevOps team. Martin talked about how Facebook has automated its research data centre to the extent that it identifies broken GPUs and schedules maintenance itself. At the scale Facebook operates, GPUs fail regularly, so automating operations helps the deep learning engineers focus on developing new things. Impressive.

Marek Rosa made one of the most memorable appearances. He had always dreamt about solving general artificial intelligence so he started his own research company called GoodAI and financed it himself with money he had made developing games – just because he "did not have time for investors". As if that is a choice we all can make!


Autonomous drive

Autonomous driving seems to be on everyone’s agenda today, and GTC Europe proved no different. Besides the academic presence, Volvo Cars, Audi, and Renault represented the automotive industry and Nvidia the tech industry. More and more, it looks like two sides in a race, even if the split is not as clean as tech vs. auto.

The classical engineering approach builds on the achievements of the last decade, such as adaptive cruise control and lane support functions, and adds additional sensors and redundant electronic architectures to widen the scope and diminish failure rates. According to their talks, both Volvo and Renault favour this approach.

On the other side of the spectrum, we find end-to-end deep learning, where raw sensor data is fed into a neural network tasked with outputting control commands. This is what Nvidia is doing with their research vehicle BB-8, which can be seen in the video to the right. BB-8 has learned the basic behaviour of staying in lane or on the road by watching how humans drive. The net trained in BB-8 – called PilotNet – is used as a base behaviour along with other modules in Nvidia's autonomous drive OS DriveWorks.
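As a toy illustration of the end-to-end idea (PilotNet itself is a convolutional network trained on camera frames; this sketch substitutes a tiny linear model and synthetic "sensor" vectors), the whole pipeline reduces to fitting raw input straight to the steering command a human applied, with no hand-written lane detector in between:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fake "raw sensor" frames, 16 values each. In the real system this would
# be camera pixels; here a lane-offset signal is hidden inside the frame.
frames = rng.normal(size=(200, 16))
lane_offset = frames[:, 3] - 0.5 * frames[:, 7]
# Labels: the steering a human driver applied, roughly countering the offset.
steering = -0.8 * lane_offset + rng.normal(0.0, 0.01, 200)

# "End-to-end" training: fit raw input -> steering directly. The model never
# sees the lane offset explicitly; it must discover it from the raw data.
w, *_ = np.linalg.lstsq(frames, steering, rcond=None)

# Inference on a new frame: one forward pass gives the control command.
new_frame = rng.normal(size=16)
command = float(new_frame @ w)
```

The attraction is the short pipeline; the catch, as the safety-minded camp points out, is that the learned mapping is hard to inspect and certify.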

In between these extremes we see the modular approach, where the problem is divided into parts that can be solved by either deep learning or traditional handcrafted algorithms. Audi presented their work on evaluating both this and end-to-end deep learning, and they reported positive results on using synthetic data for training neural networks on real-world problems. That is very encouraging for our own strategies, which aim in the very same direction.

GTC Europe did not shed any new light on which side will be first to put autonomous vehicles (level 4) on the market for us consumers to enjoy. It is, however, clear that the different sides focus on slightly different questions, in line with their own strengths (or perhaps the focus is what creates those strengths). Volvo and Renault both stress the safety and liability aspects, while Nvidia focused on what their car is able to do. In the end, one needs to provide both a capable AI driver and a safe electrical solution. At Berge, we are sure the tech side has some tough nuts to crack (which we, of course, can help with) before becoming commercially viable, but we believe that the classical approach is a dead end with no hope of scaling to city traffic or to the rough traffic of other parts of the world. Sorry.


Embedded and virtual reality

The German Research Center for Artificial Intelligence, DFKI, brought their robot Mantis, used for research on locomotion and manipulation behaviours for space-exploring agents.

At the show, several companies displayed their solutions for embedding GPUs into products from drones and robots to military vehicles and space applications, many of them using the Jetson TX1 platform. Bringing GPUs from the consumer market to tough environments in military and space will require improvements in robustness and power efficiency that hopefully will benefit consumers as well, but that definitely will aid Nvidia's incursion into automotive.

That VR is making its way into business is clear. IKEA hosted an informative and funny talk about how they are using the technology today. Maybe you missed it, but they actually released a VR kitchen simulator on Steam earlier this year. It is quite impressive and gives a much better feel than visiting a store or browsing their website. A fun note is that they had as many downloads when launching an update adding meatballs as at the original launch! If you do not have an HTC Vive, you can always come by our office and try out the simulator (otherwise as well, by the way).

IKEA use the Unreal Engine, which we also use in our simulators (e.g. this), and using game engines was actually a small theme across the conference. Several academics discussed virtual proving grounds for automotive applications. Yandex said they use virtual data to enhance their network training. Google DeepMind is moving into teaching their AI to play 3D games. At Berge, we have discussed this for some time, since it connects the strengths of our visualisation team with the strengths of our deep learning team. It is good to see others thinking along the same lines.


Link to the event: http://www.gputechconf.eu

Report: RE-WORK Deep learning summit

As the sun made its final attempt to heat up the Londoners, Arvid Nilsson and Jonas Karlsson from the Deep learning team at Berge flew over the North Sea to listen to discussions on this popular topic in the British capital. Here are their reflections on the two days of 'Deep learning summit', arranged by RE-WORK on September 22-23!


Data, data, data

The issues of lacking data, and of trouble accessing it, were a recurring topic throughout the summit. A number of different methods to overcome them were discussed. Raia Hadsell from Google DeepMind talked about continual learning and especially how progressive neural networks can be helpful when data is in short supply and when one would like to cut training time for new networks.

The key idea behind these networks is a concept called transfer learning. First you train a neural network for a specific task, and then you reuse that network when training a new network for a new task. This is done by feeding the new input data to the old network as well and routing its activations into the new network via adaptor blocks. During training, the weights in the old network are frozen, and only the weights in the new net and the adaptor blocks are updated. This could be a very nice approach for training neural networks on synthetic data in a simulation and then utilizing them in a new network trained in a real-world environment.
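A minimal sketch of that wiring, assuming a single hidden layer per column and a plain linear adaptor (the published architecture stacks several layers and puts non-linearities in the adaptors), could look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)

D_IN, D_HID, D_OUT = 8, 16, 4

# Column 1: the old network, already trained on task A. Its weights are
# frozen - from here on we only ever read its activations.
W1_h = rng.normal(size=(D_IN, D_HID))

# Column 2: the new network for task B, plus an adaptor block that feeds
# the old column's hidden activations laterally into the new output layer.
W2_h = rng.normal(size=(D_IN, D_HID)) * 0.1    # trainable
W2_o = rng.normal(size=(D_HID, D_OUT)) * 0.1   # trainable
A_12 = rng.normal(size=(D_HID, D_OUT)) * 0.1   # trainable adaptor

def forward(x):
    h1 = relu(x @ W1_h)   # frozen column: task A features
    h2 = relu(x @ W2_h)   # new column: task B features
    # The task-B output combines its own features with adapted task-A features.
    return h2 @ W2_o + h1 @ A_12

y = forward(rng.normal(size=D_IN))
```

Training task B would update only W2_h, W2_o, and A_12, while W1_h stays frozen, so performance on task A cannot degrade.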

Deep learning in health care

A lot of the focus on the second day of the summit was on health care. Jeffrey de Fauw talked about how to detect diabetic retinopathy with convolutional neural networks. These kinds of networks could potentially be of great help in the future of medical diagnostics. Another topic in the field was how we can use AI to predict whether a novel drug will be safe and effective before even trying it out.

Ali Parsa from Babylon Health gave a very inspiring talk about personalized health service. He spoke of how we can make the health service more accessible for people in poor areas and how health service can be more efficient and of higher quality in general.

Self-driving cars and robotics

Ingmar Posner from the University of Oxford talked about the future of self-driving vehicles and how deep learning is applied in that field. He mentioned some very interesting concepts about how to learn a reward function for reinforcement learning by evaluating data from vehicles driven by people.
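The talk only sketched the idea, so here is a generic, hypothetical illustration of one simple flavour of it: assume the reward is linear in a few state features and nudge the reward weights until the learner's behaviour reproduces the feature statistics of the human demonstrations:

```python
import numpy as np

# Feature statistics along a trajectory: how much each of three made-up
# state features ("in lane", "close to car ahead", "smooth speed") occurs.
# These numbers would come from human-driven data; here they are invented.
expert_features = np.array([0.9, 0.1, 0.8])   # sums to 1.8

def agent_features(w, rng):
    # Stand-in for "run the current policy": the agent spreads a fixed
    # feature budget of 1.8 across features in proportion to how much the
    # current reward weights favour them (softmax), plus a little noise.
    p = np.exp(w) / np.exp(w).sum()
    return p * 1.8 + rng.normal(0.0, 0.01, 3)

rng = np.random.default_rng(0)
w = np.zeros(3)
for _ in range(200):
    # Feature-matching update: raise the weight of features the expert
    # visits more than the agent, lower the others.
    w += 0.5 * (expert_features - agent_features(w, rng))
```

Real inverse reinforcement learning replaces the toy agent_features stand-in with an actual policy-optimisation inner loop, but the feature-matching principle is the same.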

Deep tracking technology was presented by Peter Ondruska, also from the University of Oxford. Peter presented how neural nets can be trained to track objects very effectively from raw laser data. He also showed some very impressive 3D reconstruction networks.

Startups

A number of different startups talked about how they apply deep learning in their applications. Two examples worth mentioning are Tractable and AI Build. Tractable has built a tool that significantly lowers the time it takes to label images, and AI Build uses deep learning to make 3D printing adaptive during printing, using a vision system to evaluate the height and structure of the surface on the go.

Deep learning in chatbots

The chatbot track at this year's RE-WORK was interesting to follow, and here are some notes of relevance:

  • Consensus is that the demand and the market are ready but the technology is not quite there yet.
  • Tay from Microsoft got quite a bit of bashing.
  • Everybody is starting out with wit.ai (acquired by Facebook in January) or API.AI (acquired by Google late September) but most developers move on to creating their own framework due to scaling issues.
  • A couple of shameless plugs but in general great presentations and really interesting applications. (Tina the T-Rex, admitHub, your.MD, etc.)

How to install Tensorflow with GPU support on a machine with Ubuntu 14.04 LTS

Jonas wrote a guide to taming your dual-GPU setup in Ubuntu and we wanted to share it. Enjoy! 

I struggled with this installation for some time, so here is a summary of things to keep in mind along the way and, in general, a guide that hopefully lets you re-do what I did in much less time!

In general, Nvidia has many good guides that show how to prepare GPU support for TensorFlow and how to do the actual TensorFlow installation itself. I very much recommend their forum and guides (even better than the ones on the TensorFlow homepage).

At the time of writing, I installed Cuda 7.5 and Cudnn 5.1. The installation was done on a laptop with a GeForce GTX 950M graphics card. The laptop also has an integrated GPU. Having multiple GPUs and using a laptop with Optimus etc. probably adds a bit to the hassle.

I will state the guides that I followed, but I will also note what I found important to do DIFFERENTLY from them. So look closely for those notes.

OK, here we go:

First of all, I hope that you have all of your documents in a cloud service of some sort and that all of your current work is pushed to a repo. When things break during this process, the easiest solution is sometimes to just reinstall Ubuntu.

Make a bootable USB stick with your preferred Linux distro so you can easily reinstall it. There is a nice tool in Ubuntu called “Startup Disk Creator” that handles this really nicely.

Having another computer next to the one you are installing on is really nice, because you will probably need to download/google/check something, and that can be hard when you are working in the console.

Make sure that your computer does not have secure boot enabled. The setting can be found under BIOS -> Security -> Secure boot; set it to disabled. If you already have Linux installed, it is not certain that you can still log in after changing the secure boot option, so just reinstall Linux. If possible, don’t run a dual-boot system. It sometimes makes things a bit more unstable, and one tends to make the Linux partition too small (I did). Go all in on Linux! :)

We are going to do the different installations in the following order:

  1. Install the GPU drivers with NO OpenGL support.

  2. Install cuda + cudnn with NO drivers and NO OpenGL support.

  3. Build and install TensorFlow.

  4. Add a couple of paths.

  5. Test it!

1. Driver installation

  1. Go the the Nvidia’s driver homepage and download the appropriate package for your gpu. http://www.nvidia.com/Download/index.aspx?lang=en-us

  2. Open a terminal: ctrl + alt + t

  3. Run: $sudo apt-get install build-essential

  4. Create the file /etc/modprobe.d/blacklist-nouveau.conf with:
    blacklist nouveau
    options nouveau modeset=0

Save and close the file.

Then run: $sudo update-initramfs -u

  5. Reboot the computer.

  6. Press ctrl + alt + F1 to enter the console.

  7. Go to the directory where your driver is located and run: $ chmod a+x NVIDIA-Linux-x86_64-367.44.run

  8. Now, run: $ sudo service lightdm stop

  9. Install the drivers by running: $ sudo bash NVIDIA-Linux-x86_64-367.44.run --no-opengl-files

  10. You might get an error during the installation, but continue if possible.

  11. The installation should now be complete.

  12. Check that the device nodes are present: if the /dev/nvidia* files do not exist, run:
    $ sudo modprobe nvidia

  13. Reboot the computer.

2. Cuda installation

Follow this forum thread to do the installation of cuda:

https://devtalk.nvidia.com/default/topic/878117/cuda-setup-and-installation/-solved-titan-x-for-cuda-7-5-login-loop-error-ubuntu-14-04-/

Check out the reply at the end of the thread. The beginning of that reply is what has been copied into “1. Driver installation” above. So when doing this, make sure to answer NO when the installation asks if you want to do the driver installation part of the cuda installation.

Cuda can be downloaded here:
https://developer.nvidia.com/cuda-toolkit

Make sure to use the appropriate version of cuda. Check the TensorFlow homepage to see what is supported:
https://www.tensorflow.org/versions/r0.10/get_started/os_setup.html

Cudnn can be downloaded here:
https://developer.nvidia.com/cudnn

Unfortunately, you need a login to get the cudnn drivers. Also check which version to use on the TensorFlow homepage.

3. Tensorflow installation

Make sure that you have Python up and running the way you want. Consider using Anaconda or perhaps a virtual environment, or just make sure to install at least numpy and scipy. Some other packages might be needed as well; whatever is missing in Python will be highlighted with an error when building the TensorFlow package.

I used python 2.7.

Follow this guide:
http://www.nvidia.com/object/gpu-accelerated-applications-tensorflow-installation.html

4. Add paths

Add this to .bash_profile or .bashrc, depending on what you have and what you are using (note: these lines go into the file itself, not the terminal):
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH

Simply run $sudo nano ~/.bashrc
OR $ sudo nano ~/.bash_profile

Add the content, save and quit.

Restart the terminal.

5. Test

Do the test according to the guide in section 3. You might already have tried this and gotten an error saying that it couldn’t find some cuDNN file (due to missing paths). Try again!

Happy deep learning! Send me an e-mail if you want help!

/Jonas Karlsson, jonas.karlsson@berge.io

Machine learning meetup @ Berge

We believe machine learning is about to change the world and at Berge we want to be part of that journey. Practitioners, researchers, and students from Göteborg joined us for an open-space discussion on everything from scientificity to what is happening in Göteborg today.

If you are curious about machine learning in Gothenburg, join the community.