One Hour With: Raspberry Pi NoIR Camera


Over the Christmas period I had a chunk of time off, and so of course I filled it with many more things I’d like to do than I really had the time or energy for. One of these was to play with one of my Christmas gifts, a NoIR camera for my Raspberry Pi.


I hit upon the idea of setting myself a time limit, to see how far I could get in just one hour. This may inspire a series of similar posts to see how far I can get with a project in one hour.

Starting conditions

My hour did not start from an entirely blank slate. I’ve had the Raspberry Pi for a while now, so I had a power supply, an HDMI lead, and an SD card. I’d already set up Raspbian and OpenELEC on the SD card.

Objective

I’ve got two Cotier cameras set up acting as CCTV at the front and back of the house; my objective was to get the Pi acting as a third camera in the mix.

Results

I didn’t make it in one hour, but I did make it shortly after. At the hour mark I think I had the camera ‘working’ and using motion to detect movement, but I wasn’t able to access the stream for some reason. I ‘fixed’ that by switching the stream from the recommended MD5 authentication down to basic auth.

What I achieved in the hour

In the hour I did achieve a bunch, not all of which was really related to my objective.

I attached the camera module. There are two places that look like they could take it, and in neither is it completely obvious which way round the ribbon goes, but a quick Google pointed me at the answer – so this was very quick. I connected everything up to a monitor and keyboard and booted up.
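With everything plugged in, it’s worth a quick sanity check that the Pi actually sees the module before going any further. Something like this (assuming the camera interface has been enabled via raspi-config):

```sh
# Check the firmware can see the camera module
vcgencmd get_camera      # should report: supported=1 detected=1

# Grab a test still to confirm the ribbon is seated the right way round
raspistill -o test.jpg
```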

I found some instructions specifically for setting up the Pi with motion for the camera, including a custom build of motion that supports the Pi’s camera module. However, the start of those instructions (wisely) has you do an update for the rpi software and a Raspbian update/upgrade.
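For reference, the update step is roughly the usual incantation – nothing exotic, but each command can take a while on a Pi:

```sh
# Bring Raspbian's packages up to date
sudo apt-get update
sudo apt-get upgrade -y

# Update the Pi's firmware, then reboot to pick it up
sudo rpi-update
sudo reboot
```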

This took a huge chunk of my hour before I even got started. I could have excluded this from the timer, but it is worth remembering that this sort of thing does need to get done, and can take a big chunk of an hour. Had I been starting from a fresh Raspberry Pi, then I would have killed a similar amount of time on a first install of Raspbian and other software.

After the update, I followed the instructions to download and install the custom motion build. I didn’t download their config, instead deciding to go through motion.conf myself and look for the settings they talked about along with others. Here I had the advantage that I’ve already got two cameras up and running using motion, so I’m familiar with it.

One thing I have running on my other setup is a NotifyMyAndroid script which sends my phone a message on events. I copied this across and set it up.

I also wanted the output of all cameras to end up in the same folder, to be synced with Dropbox. My existing two cameras are both controlled by motion on a different server, so the existing folder was local to that. I decided to move everything to a shared NAS* drive, and so I included the reconfiguring of the existing system to point at the new location.
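The notification hook is nothing fancy – a minimal sketch of the idea looks something like the below. The endpoint, parameter names, and the key are assumptions from memory of NotifyMyAndroid’s public API, not a copy of my actual script:

```shell
#!/bin/sh
# Hypothetical sketch of a NotifyMyAndroid hook for motion.
# NMA_KEY and the API details are assumptions, not my real setup.
NMA_KEY="your-api-key-here"

nma_notify() {
    # $1 = event title, $2 = description
    curl -s \
         -d "apikey=$NMA_KEY" \
         -d "application=CCTV" \
         -d "event=$1" \
         -d "description=$2" \
         https://www.notifymyandroid.com/publicapi/notify
}

# Example call (commented out so the script is safe to source):
# nma_notify "Camera 3" "Motion detected at the front door"
```

In motion.conf you then point the event hook at the script, e.g. `on_event_start /usr/local/bin/nma-notify.sh`.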

*(It later turned out that Dropbox sync apparently only works where it is intercepting file-creation events, or similar, so files created in the folder by my Raspberry Pi are invisible to the sync process and it does not push them out… This is a work in progress!)

I spent a while faffing around trying to figure out why the stream wasn’t working when I pointed to it. It was prompting me for credentials, but not accepting the ones I gave. I tried Chrome and VLC with no joy. However, I did notice that I was getting pictures and video stored in my videos folder.

After the hour was up

I was so close when the alarm went off that I just kept going. A few minutes later I had switched authentication back to basic auth and the stream worked fine. This led me into tweaking the config to keep the CPU usage under control. I’m still not sure whether ‘locating movement’ by drawing boxes into the frame is CPU intensive or not. I’ve tried it on and off. With it on I was clearly skipping frames, so we’ll see how it performs with it off. I also tried 4 frames per second (my other two cameras are set at 5 fps), but I’ve since bumped that back down to 3 fps.
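Pulled together, the relevant motion.conf tweaks look roughly like this. Option names vary between motion versions (older builds use `webcam_*`, `locate`, and `output_normal` instead), so treat this as a sketch rather than my exact file:

```
# motion.conf fragment (newer option names; a sketch, not a drop-in config)
framerate 3                # tried 4; other cameras run at 5; settled on 3
locate_motion_mode off     # drawing boxes round movement seemed to skip frames
output_pictures off        # video only, skip the stills
stream_auth_method 1       # 1 = basic auth; 2 = MD5 digest (wouldn't work for me)
stream_authentication user:password
```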

I stopped it taking pictures and just have it taking video, as I don’t really need the images and I guess they would also take more processing. I added a new firewall rule to my router and added the camera feed as a third one in IP Camera Viewer on my phone. I set up the camera by our front door. The idea is that the existing two are wide ground shots; with the front door camera we should get a better view of anyone who comes to the door. I’m not sure if that is where I’ll leave it; ultimately I want to add a wifi module to the Pi and make it portable enough to put anywhere. That said, another crazy scheme is to mount it on my pan-and-tilt mechanism and see if I can have the Arduino power the Pi, and have the Pi send control signals to the Arduino. In theory this would allow motion to send tracking signals as well, which would be fun to try.

Well into hour three or four of playing around, I decided to try to make the whole setup more robust. My existing system relies on VLC to transcode the IP cameras’ h264 streams into something that motion supports, plus some scripts that allow motion to trigger more VLC sessions to record hi-res streams when events happen. This has been very flaky, as the VLC sessions tend to terminate after a certain number of failures, and it was all kicked off by me in terminal sessions, so any time we rebooted I’d have to remember to set things going again.

I dug into more VLC setup and found a way to start VLC as a daemon service using init.d and start-stop-daemon, passing in my setup of two channels as a single vlm.conf file. This means running just one process that kicks off both channels, and gives a nicer start/stop mechanism for the webcam transcoding. Along the way I realised motion fires an event when it loses connection to a camera, and I could use that to force a restart of the webcam service and hopefully let it ‘self-heal’ when things go wrong. We shall see whether this increases reliability.
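The vlm.conf ends up looking something like this – the stream URLs and ports here are placeholders, not my real cameras:

```
# vlm.conf sketch: transcode two h264 IP camera streams to MJPEG
# so motion can consume them (addresses and ports are placeholders)
new cam1 broadcast enabled
setup cam1 input "rtsp://192.168.1.10:554/stream1"
setup cam1 output #transcode{vcodec=MJPG}:standard{access=http,mux=mpjpeg,dst=:8081/}
control cam1 play

new cam2 broadcast enabled
setup cam2 input "rtsp://192.168.1.11:554/stream1"
setup cam2 output #transcode{vcodec=MJPG}:standard{access=http,mux=mpjpeg,dst=:8082/}
control cam2 play
```

The daemon then just runs `vlc -I dummy --vlm-conf /etc/vlc/vlm.conf` from the init.d script, and (in newer motion versions) an `on_camera_lost` hook in motion.conf can call the service’s restart to do the ‘self-healing’.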

The remaining two problems I know of are:

  • Dropbox sync. I need to come up with some way to make Dropbox aware of the new files placed by the Pi. My current hope is that a local user ‘touch’ on the files might work. If so, I might have an event triggered that executes a remote command on the primary Dropbox machine to touch all the files. If that doesn’t work… well I guess I’ll just have to think of something else.
    UPDATE: This worked: a remote touch of the files got Dropbox to notice them.
  • The second problem, which I’ll try to investigate, is that I often use a private VPN on our primary server, but after periods of inactivity it seems to stop working, requiring a manual stop/start to come back. This means that after some time the system stops seeing the outside world at all and the sync fails, so I need to come up with something that will keep the connection alive more reliably.
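The remote touch that fixed the first problem can be sketched as below – paths and hostnames are made up for illustration:

```shell
#!/bin/sh
# Sketch of the Dropbox 'nudge': re-touch every file under the synced
# folder so the Dropbox client notices files the Pi wrote directly to
# the NAS share behind its back. Paths and hostnames are placeholders.
retouch_sync_files() {
    find "$1" -type f -exec touch {} +
}

# This runs on the machine that hosts the Dropbox client; the Pi can
# trigger it remotely after an event, e.g.:
#   ssh mediaserver '/usr/local/bin/retouch-sync.sh /mnt/nas/cctv'
```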

All in all, my ‘I’ll just do this for an hour’ snowballed into much of the day spent tweaking my setup. However, I justified this as starting the new year with things ‘working’. It also spun off into me finally upgrading XBMC to Kodi and getting the remote app on my phone working again. 🙂