Travis CI SciPy requirements.txt

I have noticed that Travis CI currently provides SciPy 0.9.0. That’s fine for most of my code (except for savgol_filter, which is new in SciPy 0.14.0).

When I put scipy>=0.9.0 in requirements.txt, even though Travis already gets SciPy 0.9.0 from
apt-get install python-scipy
pip still tries to install the latest SciPy from source.

It’s been suggested by many to just use Miniconda with some boilerplate in .travis.yml. You can see what I’m currently using.

Matplotlib ValueError on LogNorm plots

I was getting the error

ValueError: Data has no positive values, and therefore can not be log-scaled.

The issue was that I was setting vmin=0 in my pcolormesh() call. Setting vmin=1, or some other small positive value, makes the plot work with norm=LogNorm() as expected.
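
For reference, here is a minimal sketch of the fix with synthetic data (the array z and its values are made up for illustration); the key is giving LogNorm a positive lower limit instead of zero:

import numpy as np
from matplotlib.pyplot import figure, show
from matplotlib.colors import LogNorm

z = np.random.rand(50, 50) * 1000.
z[z < 50] = 0.  # synthetic data containing zeros

fg = figure()
ax = fg.gca()
# vmin must be positive for a log color scale; vmin=0 triggers the ValueError above
pcm = ax.pcolormesh(z, norm=LogNorm(vmin=1., vmax=z.max()))
fg.colorbar(pcm, ax=ax)
show()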

Speed of Matlab vs. Python Numpy Numba

Here is a comparison on my Intel i7-2600 Sandy Bridge (three-year-old) desktop PC.

Python 3.4.2, Anaconda 2.1, IPython 2.2.0, Numpy 1.8.2 with Intel MKL

import numpy as np
A = np.matrix(np.random.randn(5000,5000))
B = np.matrix(np.random.randn(5000,5000))
%timeit A*B
1 loops, best of 3: 2.51 s per loop

Matlab R2014b, also with Intel MKL
A = randn(5000,5000);
B = randn(5000,5000);
f = @() A*B;
timeit(f)
ans =
5.1059

So, Numpy is about twice as fast as Matlab at this matrix multiplication.
————————————————-
Example 2: Using Numba in iterative algorithms.

Python
from numba import jit  # just-in-time compiles Python functions to machine code
from time import time

@jit
def f():
    x = 0
    for i in range(int(1e7)):  # range is much faster than numpy.arange here
        x = 0.5*x + i % 10
tic = time()
f()
print('elapsed time (sec)',time()-tic)
elapsed time (sec) 0.07639932632446289

Matlab R2014b
tic, x = 0; for i = 0:1e7-1; x = 0.5*x + mod(i,10); end, toc
Elapsed time is 0.608442 seconds.

Python is 7.96 times faster than Matlab for this trivial test.
You can also find plenty of examples where Python is somewhat slower than Matlab. For me, the cases where Python was much faster greatly outweighed the cases where it was slower.
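
One caveat when timing @jit functions: Numba compiles a function the first time it is called, so the very first call can include compilation overhead. Here is a minimal sketch (same toy loop as above, warm-up pattern added by me) of separating compile time from run time:

from numba import jit
from time import time

@jit
def f():
    x = 0.
    for i in range(int(1e7)):
        x = 0.5*x + i % 10
    return x  # return the result so the work cannot be optimized away

f()  # first call triggers JIT compilation
tic = time()
f()  # second call times only the already-compiled code
print('elapsed time (sec)', time() - tic)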

Python: Numba 0.15.1 has a regression: it doesn’t like “is not”

Update: this has been patched; I’m waiting for the next release of Numba after 0.15.1.

————-

In trying to write idiomatic Python, I use None the way many people are taught to use NaN in languages such as Matlab: to indicate that a command did not execute because of an unused option, or that a function result is undefined.

The current (0.15.1) version of Numba does not understand the oft-used phrase:

if x is not None:
    print('great work')

It gives the error:
numba.lowering.LoweringError: Failed at object mode backend
Internal error:
ValueError: 'is not' is not in list

You also can’t just say
if not x is None
You’ll get the same error; checking both forms with the standard-library dis module suggests they compile to the same bytecode.
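
A quick check with dis (function names here are just for illustration):

import dis

def f(x):
    return x is not None

def g(x):
    return not x is None

dis.dis(f)  # on CPython 3.4, shows a single COMPARE_OP 'is not'
dis.dis(g)  # same bytecode: the compiler rewrites "not ... is" into "is not"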

Another disallowed phrase is raise RuntimeError();
just use exit() instead.

Instead I have to use NaN:

if not numpy.isnan(x):
    print('great work')

The error
numba.bytecode.ByteCodeSupportError: does not support cellvars
can occur if you use a default argument value, e.g.
def blah(x, y=3)
or if you use nested functions.
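
One possible workaround (a sketch with hypothetical names, not something Numba itself prescribes) is to keep the default value in a plain-Python wrapper and give the @jit function only explicit arguments:

from numba import jit

@jit  # the jitted core takes every argument explicitly: no defaults, no nesting
def _blah_core(x, y):
    return x + y

def blah(x, y=3):  # the plain-Python wrapper holds the default value
    return _blah_core(x, y)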

You can see examples of this at:
https://github.com/scienceopen/numba-examples

Intel Edison vs. Raspberry Pi: OpenCV2

Using a proprietary algorithm for static image analysis, and running the algorithm repeatedly,

the Intel Edison is about 2.35 times FASTER than the Raspberry Pi, using only ONE of the TWO Intel Edison cores.

Measured power consumption of Intel Edison

Using a multimeter and powering via J21 with a 9-volt battery, I measured:
booting up (peak): 984mW (8.2V * 120mA)
using Wifi 5GHz band (opkg update): 680mW (8.5V * 80mA)
typing text (using serial port): 346mW (8.65V * 40mA)
idle (serial port sleeps after a few seconds): 88mW (8.8V * 10mA)
powered off (LED is on adapter board): 45mW (8.9V * 5mA)

I don’t recommend powering the Edison with a 9-volt battery, and my measurements were pretty casual; I did them quickly, as there seem to be few published measurements of the Intel Edison’s power draw. I did find that someone else recently benchmarked the Raspberry Pi vs. Beaglebone Black vs. Intel Edison.

I tried unplugging all USB devices, and the idle current was still approximately 10 mA.

The spec sheet gives the idle power with Wifi as 35 mW, while my measurement is 88 mW. Likely sources for my “high” reading are the two bright green LEDs on the USB adapter board and the switching power conversion from 9 V to 1.8 V. I would expect the discrepancy to be smaller than observed, but my measurement method may be faulty as well.

The fact remains that the Edison draws far less power at idle, perhaps 1/20 that of the Raspberry Pi Model B.

Installing Python Pip on Intel Edison

Note: the current Yocto images leave only a few hundred MB free under /,
while giving a couple of GB free under /home. Be careful not to fill up /.
I may remap the Python libraries to /home.

Assuming you’ve already added the unofficial repository, I did the following:

opkg install python-pip


cd
curl -o get-pip.py https://bootstrap.pypa.io/get-pip.py
python get-pip.py

and now you’re ready to easily install modules with pip.

Getting started with Intel Edison

These directions are for the non-Arduino board, available from Mouser, Amazon, Newegg, etc. as model EDI1BB.AL.K

These directions assume you’re running Ubuntu Linux 14.04 on your laptop PC, and that you have at least a beginner level knowledge of working with embedded/single board systems such as the Beaglebone Black or Raspberry Pi.

Before you start, make sure your user is in the “dialout” group on your laptop by typing in a terminal
sudo adduser yourusername dialout
You will need to log out and log back in on your laptop (a reboot is not required) for this to take effect.

This Edison board includes two USB ports, and you will plug into the Edison with two USB to micro-USB cables (not included with Edison).  I used the ubiquitous Micro USB Type B.

The micro-USB cable connecting to J16 (the OTG port) provides power to the Edison and mounts an 805 MB FAT32 partition named Edison.
The micro-USB cable connecting to J3 connects to an internal serial-to-USB converter.

You can watch the messages that pop up on your laptop by typing the
dmesg
command. Avoid unplugging the Edison while it’s running; it’s a computer, and I don’t know whether the onboard flash could be corrupted (the same applies to the Beaglebone, Raspberry Pi, etc.).

On my laptop, the Edison came up on /dev/ttyUSB0 when both J16 and J3 were connected to my laptop. I typed on my laptop
screen /dev/ttyUSB0 115200

Press Enter and you’ll get a login prompt. Type
root
and there’s no password (unless you set one previously)

Software Update

Let’s update with the latest image.

You want Edison Yocto complete image files

Unzip this to a directory ON YOUR LAPTOP. Then, on your Linux laptop, type

sudo apt-get install dfu-util
sudo bash flashall.sh 

from that directory.
Note: this command erases everything on the Edison, including configuration settings.

If you get a message like otaupdate.scr not found upon rebooting, the update is probably not going through.  The process will take about five minutes (much longer than a normal reboot).

You can confirm the proper version is uploaded by typing in the Edison

cat /etc/version

the current firmware version (as I type this in November 2014) is edison-rel1-main-weekly_build_16_2014-10-14…..

Edison Configuration

I type
configure_edison --setup
The Edison can connect to WPA2 Enterprise as well as typical home WPA2 access points.

Update Intel Repositories

First I got the latest IoT Developer Kit libraries by typing
echo "src intel-iotdk http://iotdk.intel.com/repos/1.1/intelgalactic" > /etc/opkg/intel-iotdk.conf
opkg update
opkg upgrade

Add Unofficial Repository (at your own risk)

vi /etc/opkg/base-feeds.conf
and paste in

src/gz all http://repo.opkg.net/edison/repo/all
src/gz edison http://repo.opkg.net/edison/repo/edison
src/gz core2-32 http://repo.opkg.net/edison/repo/core2-32

Then hit your Escape key and type

:wq

to save the file and exit (you forgot how to use vi, didn’t you?)

Now I have access to many precompiled programs. The core2-32 directory currently holds the ones you might recognize.
For example:

opkg update
opkg install nano

and so on

Sparse Matrices in Python from Matlab R2014b

First of all, you can’t pass sparse matrices between Matlab and Python, so you have to have enough RAM to hold the full (dense) matrix, and probably a copy or two of it. This is more to show how it could be done, in the hope that MathWorks will improve the passing of variables in future releases of Matlab.

All commands are issued in Matlab R2014b.

a = eye(5);
A = py.numpy.reshape(a(:)',size(a));
As = py.scipy.sparse.csc_matrix(A)

As =
Python csc_matrix with properties:

dtype: [1x1 py.numpy.dtype]
has_sorted_indices: 1
nnz: 5
shape: [1x1 py.tuple]
maxprint: 50
indices: [1x1 py.numpy.ndarray]
data: [1x1 py.numpy.ndarray]
indptr: [1x1 py.numpy.ndarray]
format: [1x1 py.str]
(0, 0) 1.0
(1, 1) 1.0
(2, 2) 1.0
(3, 3) 1.0
(4, 4) 1.0
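
For comparison, the same construction directly in Python takes just a couple of lines (a minimal sketch run in Python itself, not from Matlab):

import numpy as np
from scipy.sparse import csc_matrix

A = np.eye(5)            # dense 5 x 5 identity
As = csc_matrix(A)       # compressed sparse column form
print(As.nnz, As.shape)  # 5 (5, 5)
print(As)                # prints the (row, col) value listing shown above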

Matlab R2014b: passing matrices to/from Python

As noted in my earlier post, this is awkward because Matlab doesn’t understand Numpy arrays. Matlab understands lists, dicts, sets, scalars, and other less frequently used classes from Python. Let’s do an example with the “clown” image included with Matlab. All commands here are executed in Matlab R2014b.

First off, here are some Python packages that don’t currently work from Matlab R2014b (they just hard-crash Matlab):
scikit-image
cv2 (opencv 2.4)

NOTE the use of the 'F' (Fortran, column-major) ordering parameter in the .reshape() and .ravel() calls to and from Python. This is crucial; otherwise your matrix will come out scrambled (transposed) inside Python!
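
To see why, here is a standalone Numpy sketch (independent of Matlab, with a made-up 2x3 example). Matlab stores arrays column-major, so X(:)' hands Python the elements in column order, and only order='F' puts them back in the right places:

import numpy as np

X = np.array([[1, 2, 3],
              [4, 5, 6]])  # stand-in for the Matlab matrix

col_major = X.ravel('F')   # what Matlab's X(:)' effectively sends: [1 4 2 5 3 6]

print(np.reshape(col_major, X.shape))             # default 'C' order: elements land in the wrong places
print(np.reshape(col_major, X.shape, order='F'))  # 'F' order recovers X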


load clown % 200x320 image is now in variable X
Xp = py.numpy.reshape(X(:)',size(X),'F'); % I ravel X to a row vector, and unravel with Numpy
Yp = py.scipy.ndimage.gaussian_filter(Xp,3); % SciPy works, but Scikit-image doesn't for me
% now let's come back to Matlab
Y = reshape(cell2mat(cell(Yp.ravel('F').tolist())),size(X)); % a regular Matlab 2-D matrix
imshow(Y,map) %map comes from when you load clown

% now let's do something similar in Matlab--note I didn't make the filter truncation radius the same, so the numerical results differ.
F = fspecial('gaussian',[15,15],3);
M = imfilter(X,F);
imshow(M,map)

Of course normally you would be using Python for a function not readily available in Matlab, but this was a side-by-side working example.