Archive for the ‘general’ Category

Error “OpenBLAS Warning : Detect OpenMP Loop and this application may hang. Please rebuild the library with USE_OPENMP=1 option”

Wednesday, September 27th, 2017

Running Torch in CPU mode, I got:

OpenBLAS Warning : Detect OpenMP Loop and this application may hang. Please rebuild the library with USE_OPENMP=1 option

The code didn't hang; it just kept printing the warning. The fix is to rebuild the library with OpenMP enabled.

Go to the folder containing the OpenBLAS repository (/path/to/OpenBLAS):

cd /path/to/OpenBLAS/
make clean
make USE_OPENMP=1
sudo make install

If you have not already done so, tell the dynamic linker where the library lives:

sudo vi /etc/ld.so.conf.d/openblas.conf

add: /opt/OpenBLAS/lib

then

sudo ldconfig
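
If rebuilding OpenBLAS is not an option right away, a stopgap (not a fix) is to pin OpenBLAS to a single thread before importing any BLAS-linked library; this avoids the thread clash that triggers the warning. A minimal Python sketch:

import os

# Must be set BEFORE importing torch/numpy or anything else that links OpenBLAS.
os.environ["OPENBLAS_NUM_THREADS"] = "1"
os.environ["OMP_NUM_THREADS"] = "1"

import numpy as np  # or torch -- imported only after the variables are set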

How to know your webcam's speed (FPS)?

Monday, September 4th, 2017

Just run the following command on Ubuntu (v4l2-ctl is part of the v4l-utils package):

v4l2-ctl --list-formats-ext

Output:

ioctl: VIDIOC_ENUM_FMT
    Index       : 0
    Type        : Video Capture
    Pixel Format: 'YUYV'
    Name        : YUYV 4:2:2
        Size: Discrete 640x360
            Interval: Discrete 0.033s (30.000 fps)
            Interval: Discrete 0.040s (25.000 fps)
            Interval: Discrete 0.050s (20.000 fps)
            Interval: Discrete 0.067s (15.000 fps)
            Interval: Discrete 0.100s (10.000 fps)
            Interval: Discrete 0.200s (5.000 fps)
        Size: Discrete 320x240
            Interval: Discrete 0.033s (30.000 fps)
            Interval: Discrete 0.040s (25.000 fps)
            Interval: Discrete 0.050s (20.000 fps)
            Interval: Discrete 0.067s (15.000 fps)
            Interval: Discrete 0.100s (10.000 fps)
            Interval: Discrete 0.200s (5.000 fps)
        Size: Discrete 640x480
            Interval: Discrete 0.033s (30.000 fps)
            Interval: Discrete 0.040s (25.000 fps)
            Interval: Discrete 0.050s (20.000 fps)
            Interval: Discrete 0.067s (15.000 fps)
            Interval: Discrete 0.100s (10.000 fps)
            Interval: Discrete 0.200s (5.000 fps)
        Size: Discrete 960x720
            Interval: Discrete 0.067s (15.000 fps)
            Interval: Discrete 0.100s (10.000 fps)
            Interval: Discrete 0.200s (5.000 fps)
        Size: Discrete 1280x720
            Interval: Discrete 0.100s (10.000 fps)
            Interval: Discrete 0.200s (5.000 fps)

    Index       : 1
    Type        : Video Capture
    Pixel Format: 'MJPG' (compressed)
    Name        : Motion-JPEG
        Size: Discrete 640x360
            Interval: Discrete 0.033s (30.000 fps)
            Interval: Discrete 0.040s (25.000 fps)
            Interval: Discrete 0.050s (20.000 fps)
            Interval: Discrete 0.067s (15.000 fps)
            Interval: Discrete 0.100s (10.000 fps)
            Interval: Discrete 0.200s (5.000 fps)
        Size: Discrete 320x240
            Interval: Discrete 0.033s (30.000 fps)
            Interval: Discrete 0.040s (25.000 fps)
            Interval: Discrete 0.050s (20.000 fps)
            Interval: Discrete 0.067s (15.000 fps)
            Interval: Discrete 0.100s (10.000 fps)
            Interval: Discrete 0.200s (5.000 fps)
        Size: Discrete 640x480
            Interval: Discrete 0.033s (30.000 fps)
            Interval: Discrete 0.040s (25.000 fps)
            Interval: Discrete 0.050s (20.000 fps)
            Interval: Discrete 0.067s (15.000 fps)
            Interval: Discrete 0.100s (10.000 fps)
            Interval: Discrete 0.200s (5.000 fps)
        Size: Discrete 960x720
            Interval: Discrete 0.033s (30.000 fps)
            Interval: Discrete 0.040s (25.000 fps)
            Interval: Discrete 0.050s (20.000 fps)
            Interval: Discrete 0.067s (15.000 fps)
            Interval: Discrete 0.100s (10.000 fps)
            Interval: Discrete 0.200s (5.000 fps)
        Size: Discrete 1280x720
            Interval: Discrete 0.033s (30.000 fps)
            Interval: Discrete 0.040s (25.000 fps)
            Interval: Discrete 0.050s (20.000 fps)
            Interval: Discrete 0.067s (15.000 fps)
            Interval: Discrete 0.100s (10.000 fps)
            Interval: Discrete 0.200s (5.000 fps)
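
If you would rather query or set the frame rate from code, OpenCV exposes the same driver information. A minimal sketch, assuming the opencv-python package and camera index 0:

import cv2

cap = cv2.VideoCapture(0)                 # device index 0; adjust for your camera
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
cap.set(cv2.CAP_PROP_FPS, 30)             # request one of the rates listed above
print("reported FPS:", cap.get(cv2.CAP_PROP_FPS))  # some drivers report 0 here
cap.release()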


We Demand More Like Andrew Ng!

Wednesday, August 9th, 2017

Just learned that Andrew Ng will start a deep learning Coursera class. I'm very excited to hear it. From my admittedly shallow experience and knowledge of industry and academia, we as a healthy society need, and should demand, such serious education efforts.

Thanks to the huge success of Fei-Fei Li's ImageNet, the deep learning that Geoff Hinton envisioned and started, carried on by Yoshua Bengio and Yann LeCun, was made a household name by DeepMind's AlphaGo. Now people talk about machine learning, artificial intelligence, deep learning, etc. almost everywhere. In particular, industry across fields has been embracing the technology like never before. All of a sudden, small startups of a few people with a few know-hows get acquired, and the speed of such acquisitions is unprecedented. Money, vanity, etc. fly high like in all the old shows/bubbles we have seen over the past decades. Yet I don't think this is a bubble. Deep learning is for real and will revolutionize many industries and unleash human potential, just as electricity did for humanity. All in all, though, there are a few clouds that could upend such prosperity.

The first is the huge disparity between what industry needs and what the talent pool can offer. As Thomas Friedman once said (loosely cited here), a breakthrough happens when what is desperately needed suddenly meets what has become available. Human society needs a long-overdue efficiency upgrade, especially after the internet. In 2012, deep learning came seemingly out of nowhere, and all of a sudden many problems that seemingly only humans could solve were being solved by this mysterious thing called deep learning. Talent became the lifeblood/resource that everybody wants. Yet academia, which in large part wasn't there for the nurturing and birth of deep learning, of course cannot produce enough such talent, while demand just keeps piling up. So we end up with lots of expedited "talents": among the few real ones, there are many less well-trained people who were exposed to environments/problems that were too cozy. Undoubtedly many of them will not be able to live up to industry's demands in many ways. Sooner or later, disappointment will be everyone's focus.

The second is that academia really hasn't had time to prepare the whole field with a strong and solid theoretical foundation. We still don't know why deep learning works; there are some educated guesses, but a clear explanation is absent. I have never seen a great advancement sustain its ups and downs without a mathematical foundation.

I should stop here; too much nagging. What I want to say is that we as a society should demand more people like Andrew Ng, and even beyond. Things happen for a reason and exist for reasons too; our society needs to demand more in order to advance humanity.

nvidia_346_uvm error in Caffe on AWS g2 instance

Tuesday, July 26th, 2016

I got the following error after rebooting my g2 instance. The code is in Python, and prior to the reboot it worked fine.

modprobe: ERROR: ../libkmod/libkmod-module.c:809 kmod_module_insert_module() could not find module by name='nvidia_346_uvm'

modprobe: ERROR: could not insert 'nvidia_346_uvm': Function not implemented

I googled around and found no direct solution, so I took a stab myself:

sudo apt-get remove nvidia-346-uvm

and then reboot.

Surprise. It works!

Still don’t know why it works.

Simple But Surprisingly Good Clustering “Algorithm”

Sunday, January 17th, 2016

This is one of those jaw-dropping papers: Density Clustering, astonishingly simple and yet with phenomenal performance. There is no need to dress it up in math, and it hardly even needs to be called an "algorithm" (though it truly is one). Here is how it works:

Given a distance/similarity matrix (pairwise distances/similarities for all data points) and a cutoff distance/similarity:

  1. For every point, count how many other points fall within the cutoff distance. This count is the density of the current point.
  2. For every point, find all other points having a higher density. Among those, find the smallest distance to the current point and use it as the current point's distance.
  3. Plot density vs. distance.
  4. The outliers in the plot (high density and high distance) are the cluster centers.
  5. Assign every remaining point to the cluster of its nearest neighbor with higher density.

No fancy math at all, and it seems to work. There is an R library too. Well done.
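
To make the steps concrete, here is a minimal NumPy sketch; the function name and the rho*delta trick for picking centers are my own stand-ins for the paper's visual outlier test:

import numpy as np

def density_peaks(D, d_c, n_centers):
    # D: (n, n) pairwise distance matrix; d_c: cutoff distance;
    # n_centers: how many centers to pick instead of eyeballing the plot.
    n = D.shape[0]
    rho = (D < d_c).sum(axis=1) - 1          # step 1: local density (minus self)
    delta = np.zeros(n)
    nearest_higher = np.zeros(n, dtype=int)
    for i in range(n):
        higher = np.where(rho > rho[i])[0]   # step 2: points with higher density
        if higher.size == 0:                 # the densest point overall
            delta[i] = D[i].max()
            nearest_higher[i] = i
        else:
            j = higher[np.argmin(D[i, higher])]
            delta[i] = D[i, j]
            nearest_higher[i] = j
    # steps 3-4: largest rho * delta values play the role of the plot's outliers
    # (assumes every density maximum ends up among the chosen centers)
    centers = np.argsort(rho * delta)[-n_centers:]
    labels = np.full(n, -1)
    labels[centers] = np.arange(n_centers)
    # step 5: in decreasing-density order, inherit the label of the
    # nearest neighbor with higher density (already labeled by then).
    for i in np.argsort(-rho):
        if labels[i] == -1:
            labels[i] = labels[nearest_higher[i]]
    return labels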

Spark Error: Too many open files

Monday, December 22nd, 2014

This is a typical Spark error that happens on Ubuntu (and probably other Linux distributions too). To resolve it, do the following:

Edit the file /etc/security/limits.conf

to add:

* soft nofile 55000
* hard nofile 55000

55000 is just the value I use as an example; you could choose a larger or smaller number. It means we allow each process to have as many as 55000 files open at once.

After saving the changes, you will need to REBOOT to make them effective.
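
Once the machine is back up, a quick way to confirm the new limit from Python (the resource module is part of the standard library):

import resource

# (soft, hard) limits on open file descriptors for this process
print(resource.getrlimit(resource.RLIMIT_NOFILE))  # expect (55000, 55000)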

One note: I recommend not going crazy with this number. For example, I once set it to 1,000,000 and Spark generated so many temporary files that my hard disk had a very hard time deleting them.

Python+Redis

Wednesday, June 18th, 2014

I was using a Python dictionary to manage my database, which turned out to be a disaster. Well, I shouldn't have tried it in the first place: it was painfully slow with more than 10,000 data entries. Redis, on the other hand, is an in-memory database that has been gaining fame quite dramatically among tech followers. So I gave it a try. Here is the time consumed on my Mac.
[Figure: Redis performance. The x-axis is the number of data entries; the y-axis is the time, in seconds, for redis+python to store them.]

For the record, I did manage to store about 10,000 data entries using a dictionary. But after waiting for so long, I decided to abandon it entirely. It is safe to say it is off this chart.
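
For reference, the kind of insert loop being timed looks roughly like this; a minimal sketch assuming the redis Python package and a local redis-server (key names are made up):

import time
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

start = time.time()
for i in range(10000):
    r.set("key:%d" % i, "value:%d" % i)  # one SET per data entry
print("stored 10000 entries in %.2f s" % (time.time() - start))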


Process the wikipedia dump data

Tuesday, May 6th, 2014

The entire Wikipedia dump can be downloaded from here.

To extract the articles, one way is to use the wikiprep code, which is written in Perl, my ex-favorite language. I ran into problems when I tried to run it after installation. For example, when running wikiprep, the output on screen was:

Can't locate Log/Handler.pm in @INC (@INC contains: /Library/Perl/5.16/darwin-thread-multi-2level /Library/Perl/5.16 /Network/Library/Perl/5.16/darwin-thread-multi-2level /Network/Library/Perl/5.16 /Library/Perl/Updates/5.16.2/darwin-thread-multi-2level /Library/Perl/Updates/5.16.2 /System/Library/Perl/5.16/darwin-thread-multi-2level /System/Library/Perl/5.16 /System/Library/Perl/Extras/5.16/darwin-thread-multi-2level /System/Library/Perl/Extras/5.16 .) at /usr/local/bin/wikiprep line 40.

BEGIN failed--compilation aborted at /usr/local/bin/wikiprep line 40.

To solve this problem, after several trials and errors and Google searches, the solution turned out to be installing whatever module is missing, here "Log::Handler". So I ran:

sudo cpanm Log::Handler

(Note that I already had cpanm installed.) Installing the missing module with cpanm made the problem go away, and now I'm running wikiprep to get the actual articles out of the dump with this command:

wikiprep -format composite -compress -f ../enwiki-20140402-pages-articles-multistream.xml.bz2 > out

Port binding in DigitalOcean Ubuntu

Sunday, February 23rd, 2014

I had trouble binding port 80 on a DigitalOcean Ubuntu droplet and was rescued by this page on Stack Overflow. (Non-root processes cannot bind to ports below 1024, which is why the trick is to redirect port 80 to an unprivileged port, 3000 here, where the app actually listens.) Following the page, all one needs is:

sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3000

Then edit the file "/etc/rc.local" (notice that it is NOT the other file suggested on the webpage) so the rule survives reboots. Editing means adding the above command with a small modification:

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3000

So far it has solved my problem and everything is working.
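
To sanity-check the redirect, one can run a throwaway server on port 3000 as an unprivileged user and then hit the droplet on plain port 80. A sketch using only Python's standard library:

import http.server
import socketserver

PORT = 3000  # the unprivileged port the iptables rule redirects to

# Serve the current directory; requests arriving on port 80 should land here.
httpd = socketserver.TCPServer(("", PORT), http.server.SimpleHTTPRequestHandler)
print("Listening on %d; try http://<your-droplet-ip>/ in a browser" % PORT)
httpd.serve_forever()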


Limit RAM size used by redis

Tuesday, February 18th, 2014

If you want to limit the maximum amount of RAM allocated to redis, you need to change the file named "redis.conf" under the path where redis is installed. The trick is that "redis-server" does not pick the file up automatically: unless you explicitly pass the modified "redis.conf" when starting "redis-server", it won't use the file at all. What I did, for example, to allocate only 100MB of RAM to redis:

Just add one line, "maxmemory 100M", to "redis.conf", then start the server with "./src/redis-server redis.conf". It should work. But be aware of the possible consequences of such a restriction; "out of memory" errors might arrive uninvited…
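
A quick way to confirm the setting took effect from the Python side, assuming the redis package and a server started as above:

import redis

r = redis.Redis(host="localhost", port=6379)
print(r.config_get("maxmemory"))  # expect {'maxmemory': '104857600'} for 100M
# What happens once the limit is hit depends on maxmemory-policy in redis.conf.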