Beaumark BM888 Pressure Cooker

A non-technical post (well, not Python or Linux related, although pressure cookers are quite technical, even if they don’t run NetBSD).

Having bought a Beaumark BM888 pressure cooker recently, I found that the ‘manual’ is a poorly translated, pretty useless little booklet (although very funny!). If you’re a complete novice to pressure cookers (as I am), this doesn’t make life easy for you. So, to try and fill in the gaps the manual leaves:

  1. The smaller plastic container is the steam condensation collector. This slips into place on the back of the unit. If you remove the lid, you’ll see a hole at the back on top – it fits under there.
  2. The purple/red pin on top is the pressure indicator – when the unit is pressurised, this will come up. After you’ve finished cooking, it must go back down (indicating the pressure has dropped) before you can open the lid.
  3. The black twist thing is the pressure release valve, and it goes on the lid. When the lid is on, it should point in the opposite direction to the pressure indicator (i.e. it should not point towards the back of the unit). After you have finished cooking, to quick-release the pressure (rather than waiting 10-15 minutes for the unit to cool down naturally), you need to turn this. DO NOT DO THIS BY HAND – use a spoon or something with some reach, as the valve will spit out hot steam until the pressure has dropped (check the pressure indicator to know when it has).
  4. Lots of recipes require you to fry/sear some things first. That’s absolutely fine – put the unit in Hot Pot mode without the lid on, and you can stick some oil in and do that (e.g. if you are making a beef stew, you need to sear the meat and fry the onions first).
  5. It doesn’t come with a steamer basket – you will need to buy one separately if you need one.

The cooking times are not included in the box or the manual; they can be found here: http://seantvdirect.com/wp-content/uploads/2016/05/hotpot-quick-guide.doc

There are also some recipes on their website here: http://seantvdirect.com/wp-content/uploads/2017/02/cookbook-March-2017.pdf

I’ve figured this all out by:

  1. Reading the Amazon reviews where people have asked these questions.
  2. Watching the cooking videos with Kevin Dundon on their YouTube channel – https://www.youtube.com/watch?v=t4YfsVB4utg
  3. Pressure cooker basics website – http://www.thekitchn.com/10-things-you-need-to-know-before-using-an-electric-pressure-cooker-220034

(As I got this for only £29 from my local branch of The Original Factory Shop, rather than the current price of £60 on Amazon, I cannot really complain too much).

Posted on October 3, 2017 at 9:55 pm by Carlos Corbacho · Permalink · Leave a comment
In: Misc

2016 – the year of Python 3

Given that I’ve posted before on the issues with Python 2 and Unicode, it’s worth noting that there is a simple solution – upgrade to Python 3.

The libraries have now finally reached a point where they are, by and large, dual stack compatible, or have unofficial forks to get you through (e.g. Fabric and Fabric3).

From experience of having done a few ports myself now (and of getting comfortable with the changes in Python 3 – remembering to use next() instead of .next() keeps tripping up my finger memory), here are a few tips to bear in mind:

  1. Go for Python 3.5 if at all possible (if you’re using Ubuntu as your base OS, that means 16.04. If you’re using Red Hat… good luck). Try and start aligning your base OS choice now – you don’t want to combine an OS upgrade with a language upgrade if you can avoid it.
  2. Don’t make your code base compatible with both 2.x and 3.x unless you absolutely have to – this is a lot of overhead. Unless you’re a library maintainer, or you have a library shared between multiple pieces of code, just go for a clean switch.
  3. If you do have libraries that have to work on both – six is your friend. The code will not look pretty, but they’ve pretty much thought of and dealt with all the cross-version issues you will hit (there’s a short sketch after this list).
  4. Get into good habits now, as you are going to be porting that 2.7 code at some point. Try to do things in a Python 3 way where possible, and use something like Pylint to tell you when you are not – e.g. exceptions are instances, not classes. Look at sprinkling some from __future__ imports into your Python 2 code to turn on Python 3 behaviour and fix what breaks (see the second sketch after this list).
  5. 2to3 is a good starting point, but not perfect, and you will need to review its output. In some cases it gets confused by name clashes between global and local modules (e.g. if you have a local module named celery.py, it can try to rename references to the global one). It is also overly conservative about dict methods now returning views rather than lists, and will try to coerce every .keys() into list(.keys()) – you probably don’t need that list() call.
  6. Unit tests – you do have them, right?
  7. Be brave – you need to bite the bullet and do this. There’s never going to be a good time, so think of it as part of the next big refactoring you do.
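
For tip 3, a minimal sketch of the sort of cross-version shims six gives you (the function and values are just illustrations, not anything from a real port):

from __future__ import print_function

import six
from six.moves.urllib.parse import urlparse  # papers over the urllib renames

def describe(value):
    # str on Python 3, (str, unicode) via basestring on Python 2
    if isinstance(value, six.string_types):
        return "a string"
    return "something else"

print(describe(urlparse("http://example.com/path").netloc))

# Iterate over a dict's items the same way on both versions
for key, value in six.iteritems({"a": 1, "b": 2}):
    print(key, value)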
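
And for tips 4 and 5, a small hedged example of sprinkling from __future__ into Python 2 code and treating exceptions as instances (the file name is made up):

from __future__ import print_function, division, unicode_literals

# Python 3 style exception handling: catch instances with "as"
try:
    with open("settings.cfg") as handle:  # hypothetical file name
        print(handle.read())
except IOError as exc:
    print("could not read config:", exc)

# In Python 3, .keys() returns a view rather than a list; you rarely
# need the list(...) call that 2to3 likes to wrap around it
options = {"debug": True, "verbose": False}
for name in options.keys():
    print(name, options[name])
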
Posted on November 21, 2016 at 11:24 pm by Carlos Corbacho · Permalink · Leave a comment
In: Python

Using Selenium with PhantomJS

If you don’t want to play around with things like CasperJS to do your simple browser automation, it turns out that PhantomJS has WebDriver support, and in turn, Selenium supports it.

To use it via the Python bindings, you’ll need to:

1) Install PhantomJS somewhere
2) Install Selenium from PyPI (e.g. with pip)
3) Run:

from selenium import webdriver

driver = webdriver.PhantomJS(phantom_path)

(phantom_path is the path to the phantomjs binary).

And that’s it! Selenium will spawn a PhantomJS process that it communicates with over WebDriver, and you can use the Selenium API to do your automation. You therefore don’t need Selenium Server either.
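
For example, a minimal sketch of driving it once it’s up (the binary path and URL are just placeholders):

from selenium import webdriver

# Path to the phantomjs binary - adjust to wherever you installed it
driver = webdriver.PhantomJS("/usr/local/bin/phantomjs")
driver.get("http://example.com/")
print(driver.title)
driver.quit()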

(CasperJS seems to be faster at driving PhantomJS than Selenium, based on anecdotal evidence from a single Google search, but it’s always nice to have options).

Posted on April 18, 2014 at 8:28 pm by Carlos Corbacho · Permalink · Leave a comment
In: Python, Web

Running Clojure on Slackware

To run Clojure on Slackware you’ll want a few things:

  1. JDK (use the Slackbuild from /extra to build the latest version yourself)
  2. Leiningen (available from SlackBuilds.org)
  3. Create a command line wrapper to execute Clojure scripts (some examples for learning Clojure expect you to have set this up, but don’t really tell you how).

Assuming you’ve installed Leiningen (in my case, 1.6.1.1), I’ve created ~/bin/clj with the following to do this:

#!/bin/sh
java -cp ~/.m2/repository/org/clojure/clojure/1.2.1/clojure-1.2.1.jar clojure.main "$@"
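
Make the wrapper executable and (assuming ~/bin is on your PATH) you can then run Clojure scripts directly – hello.clj here is just a placeholder:

chmod +x ~/bin/clj
clj hello.clj   # run a script
clj             # with no arguments, clojure.main drops you into a REPL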

Posted on January 4, 2014 at 1:59 pm by Carlos Corbacho · Permalink · Comments Closed
In: Clojure, Linux, Slackware

Varnish and Ajax

If you have endpoints on your site that serve up both HTML and JSON depending on the request type, not the URL, then you need to tell Varnish to include this in the cache hash, so that it doesn’t return JSON to clients expecting HTML, and vice versa. Oddly, I’ve not actually seen any examples of how to do this, so I came up with:

sub vcl_hash {
    if (req.http.X-Requested-With == "XMLHttpRequest") {
        hash_data(req.http.X-Requested-With);
    }
}

(Django uses X-Requested-With for is_ajax(), so it is consistent with that – if you were wondering).
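
For context, a hedged sketch of the kind of Django view this matters for (the view, data and template names are all made up):

import json

from django.http import HttpResponse
from django.shortcuts import render

def cars(request):
    data = {"colours": ["blue", "red", "green"]}
    if request.is_ajax():  # True when X-Requested-With is "XMLHttpRequest"
        return HttpResponse(json.dumps(data), content_type="application/json")
    # A normal browser request to the same URL gets the HTML page instead
    return render(request, "cars.html", data)

Without the extra hash_data() call above, whichever of those two responses Varnish cached first would be served to every client hitting that URL.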

Posted on August 9, 2013 at 11:03 pm by Carlos Corbacho · Permalink · Leave a comment
In: Apache, Django, Varnish, Web

Linux video acceleration

With the advent of open source UVD support for my HD5750 in Linux, I’ve been trying to understand what is needed for accelerating video under Linux, and how all the components work together. So I drew a diagram to try and make sense of it all:

[Diagram: Linux video acceleration]

Suffice it to say, it’s not pretty. The key thing is that there are currently two main modern acceleration APIs under Linux – the Video Acceleration API (VA-API) and the Video Decode and Presentation API for Unix (VDPAU). They don’t support all video formats, just some. Not all video drivers support both (open source Intel and closed source AMD support VA-API; closed source nVidia and open source AMD support VDPAU), and the support for each of these varies wildly from one piece of software to the next – for example, VLC uses VA-API, while the closed source Flash player uses VDPAU.

If you’re using a driver that only supports VDPAU, you have a bit more luck, because VA-API currently exposes just a subset of the VDPAU functionality (though this will probably change in the upcoming VA-API release). A wrapper driver has therefore been written that can translate from VA-API to VDPAU, so VA-API software can still be accelerated on VDPAU-only drivers. As far as I understand, though, the reverse is not true for VA-API drivers and VDPAU software.

Posted on June 24, 2013 at 9:49 pm by Carlos Corbacho · Permalink · Leave a comment
In: Linux

Psycopg2 and large result sets

Psycopg2 has a bit of a gotcha when it comes to fetching result sets that can catch out the unsuspecting developer. By default, it does not actually use server-side cursors, but simply emulates them on the client. In practical terms, this means that the entire result set of your query is fetched into the client’s memory.

It is documented these days, but buried quite far down if you’re not looking for it:

http://initd.org/psycopg/docs/usage.html#server-side-cursors

Practically speaking though, what does this actually mean? From a DB API point of view, there is no difference memory-wise between:

.fetchone()

and

.fetchall()

The entire result set has already been fetched into memory; all you are doing is controlling how much of it you read into Python objects at a time.

By and large, this is not actually a bad thing, as long as you don’t execute queries that return huge result sets – and you generally don’t need to. The key thing is to write your client code so that you do as much filtering as possible at the SQL layer, returning as small a result set as possible.

As an example, consider something like this:

cursor = connection.cursor()
cursor.execute("SELECT * FROM cars")
for row in cursor:
    if row[1] == "blue":
        return row

Using an ORM such as the Django one, the equivalent would look something like:

for car in Car.objects.all():
    if car.colour == "blue":
        return car

In the above, we’re trying to find the first car that is blue. (It’s rather contrived that we’re calling .all(), but you could also imagine some other filter that returns a large number of car records). Now, let’s say that our ‘Car’ table has 20,000 cars in it. In both cases, it naively appears that we’re only reading in one record at a time, but this is not quite the case.

As soon as we executed the query, Psycopg2 loaded the entire result set – in our case, the entire table – into memory. In the Django example, the only saving grace is that we are lazily creating the Car objects from the rows, but that’s it – the entire result set is still in memory!

Whilst you could use named cursors (even with things like Django there are various ways to force PostgreSQL to use them), it’s generally not necessary. Simply try to do as much of your filtering as possible in SQL to keep the size of the result set small, rather than filtering in your Python code.
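
As a rough sketch of both approaches (the connection details and table layout here are assumptions, not real schema):

import psycopg2

conn = psycopg2.connect("dbname=example")  # connection string is a placeholder

# Preferred: filter in SQL so only the rows you need are returned
cursor = conn.cursor()
cursor.execute("SELECT * FROM cars WHERE colour = %s LIMIT 1", ("blue",))
first_blue_car = cursor.fetchone()

# Alternative: a named cursor makes psycopg2 use a real server-side
# cursor, so rows are streamed in batches instead of loaded up front
named = conn.cursor(name="all_cars")
named.execute("SELECT * FROM cars")
for row in named:
    if row[1] == "blue":
        break

named.close()
conn.close()

The Django ORM equivalent of the first approach would be something like Car.objects.filter(colour="blue")[:1], rather than filtering .all() in Python.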

Posted on April 15, 2013 at 11:02 pm by Carlos Corbacho · Permalink · Leave a comment
In: Linux, Python

Linux Containers (LXC), libvirt and Slackware

I’ve spent the last few days getting very frustrated with trying to make a Linux Container (LXC) run via libvirt on Slackware – various weird and wonderful error messages about being unable to mount cgroups (LXC depends on cgroups to provide the networking and namespace isolation).

The short answer is that, by default, Slackware mounts a cgroup filesystem onto /sys/fs/cgroup, which causes every cgroup controller to be mounted into that one directory. Libvirt does not like this for LXC – it expects each cgroup controller to be mounted in a separate directory. I’ve therefore put together the following init script, which remounts the cgroups into the layout that libvirt expects (based on the contents of fstab on another machine running Ubuntu):

#!/bin/sh
#
# /etc/rc.d/rc.cgroup: Cgroup mounting script
#
# Remount cgroups under a tmpfs directory in /sys. By default,
# Slackware mounts /sys/fs/cgroup - however, this does not work
# with using libvirt for Linux Containers, because it expects each
# cgroup to have its own directory. So, let's do that.

# Unmount the existing /sys/fs/cgroup
umount /sys/fs/cgroup

# Create a tmpfs structure to hold all the new mounts
mount -t tmpfs -o mode=755,noatime tmpfs /sys/fs/cgroup

for cgroup in cpu cpuset cpuacct memory devices freezer blkio perf_event; do
    mkdir /sys/fs/cgroup/$cgroup
    mount -t cgroup -o $cgroup,noatime cgroup /sys/fs/cgroup/$cgroup
done

To use this, add it to rc.local before rc.libvirt is called (as libvirt needs to use the cgroups).
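
For example, a sketch of the rc.local hook-up, assuming you saved the script as /etc/rc.d/rc.cgroup, made it executable, and are using the SlackBuilds rc.libvirt script:

# /etc/rc.d/rc.local (excerpt)
# Remount the cgroups before libvirt starts, so its LXC driver finds them
if [ -x /etc/rc.d/rc.cgroup ]; then
  /etc/rc.d/rc.cgroup
fi

if [ -x /etc/rc.d/rc.libvirt ]; then
  /etc/rc.d/rc.libvirt start
fi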

Posted on October 18, 2012 at 10:09 pm by Carlos Corbacho · Permalink · Leave a comment
In: Linux, Slackware

Slackware, Amarok and Transcoding

As I’ve recently been importing CDs as FLAC, I want something that can transcode this for me and manage my collection when working with my iPod. Amarok has now finally regained transcoding support as of 2.4.

However, trying this in Slackware 14.0 RC3, it doesn’t work – trying to copy FLAC tracks to my iPod failed with an error message telling me the format is unsupported. A little bit of digging, and it turns out that to get transcoding support, you need to have ffmpeg installed when you build Amarok.

So to get it working, you’ll need to install ffmpeg and the relevant dependencies (SlackBuilds.org has this), then rebuild Amarok with ffmpeg installed. Your new, shiny Amarok will then offer transcoding as an option the next time you want to copy FLAC to your iPod.

Posted on August 27, 2012 at 11:09 am by Carlos Corbacho · Permalink · Leave a comment
In: KDE, Linux, Slackware

USB 3G Modem on Slackware – DNS

A follow-up from last year’s post – clearly having not played around with the 3G card since then, it’s only today that I realised DNS wasn’t actually working. PPPD was correctly requesting the nameservers from the remote peer, but by default it puts them into /etc/ppp/resolv.conf, which isn’t terribly helpful.

PPPD will try to call out to /etc/ppp/ip-{up,down} when bringing a connection up or down. By creating these scripts and making them executable, we can get them to set up DNS for us.

I’ve therefore created them as follows:

/etc/ppp/ip-up:

#!/bin/sh

# Change DNS resolvers
if [ -f /etc/resolv.bak ]; then
  echo "/etc/resolv.bak exists!"
else
  cp /etc/resolv.conf /etc/resolv.bak
  rm /etc/resolv.conf
  ln -sf /etc/ppp/resolv.conf /etc/resolv.conf
fi

/etc/ppp/ip-down:

#!/bin/sh

# Change DNS resolvers back.
if [ -f /etc/resolv.bak ]; then
  rm /etc/resolv.conf
  mv /etc/resolv.bak /etc/resolv.conf
else
  echo "/etc/resolv.bak missing!"
fi
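
Don’t forget to make both scripts executable, or pppd will silently skip them:

chmod +x /etc/ppp/ip-up /etc/ppp/ip-down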

With this, running wvdial now also takes care of DNS properly and I can actually resolve domain names (you can also extend these scripts if you want things like VPN, etc, but in my case, this is more than sufficient).

Posted on August 24, 2012 at 11:02 pm by Carlos Corbacho · Permalink · Leave a comment
In: Linux, Slackware