USB Camera
The ARTIK 530, 710, and 1020 development boards support a UVC-compatible USB camera. Once you have plugged your USB camera into the USB port, verify that it has been recognized by using the command below.
dmesg | grep uvc
[839.428875] [c1] uvcvideo: Found UVC 1.00 device <unnamed> (046d:0825)
[839.523992] [c2] usbcore: registered new interface driver uvcvideo
The default USB camera device node should be /dev/video0. If more than one USB camera is plugged in, they will be identified as /dev/video0, /dev/video1, and so on.
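You can also confirm which device nodes are present by listing them directly.
ls /dev/video*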
OpenCV
OpenCV® stands for Open Source Computer Vision. It provides C++, C, Python and Java® interfaces and supports Windows®, Linux®, Mac® OS, iOS® and Android®. In this programming guide, our sample code is implemented with OpenCV APIs.
Capture Still Image
Using Command Line
You can use the fswebcam utility to capture a still image from the webcam. The captured image can be saved as a PNG or JPEG file. The fswebcam package is not included in the default binary; install it by using this command.
Ubuntu:
apt install fswebcam
Fedora:
dnf install fswebcam
Usage is
fswebcam -r <resolution> <filename>
for a USB camera. For example:
fswebcam -r 1280x720 image.jpg
--- Opening /dev/video0...
Trying source module v4l2...
/dev/video0 opened.
No input was specified, using the first.
--- Capturing frame...
Captured frame in 0.00 seconds.
--- Processing captured image...
Writing JPEG image to 'image.jpg'.
Using OpenCV C++ API
OpenCV provides C interfaces. However, one issue with the C interface is that developers have to manage memory manually. C++ support was introduced in OpenCV 2.0 to help handle memory management and keep code concise.
To use OpenCV C++ APIs, install the OpenCV devel package and gcc compiler on your ARTIK board. The latter comes pre-installed with the latest versions of the ARTIK binary.
Ubuntu:
apt install libopencv-dev
Fedora:
dnf install opencv-devel
dnf install gcc-c++
An image is basically a matrix containing the intensity values of all its pixels. Mat is the C++ class in OpenCV that acts as the basic image container. Mat has two parts: the matrix header (containing information such as the size of the matrix) and a pointer to the matrix containing the pixel values.
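The Python bindings used later in this guide expose the same container as a NumPy array, which makes the matrix nature of an image easy to see. A minimal sketch, assuming a camera at /dev/video0:

import cv2

# Grab one frame from the default camera
cap = cv2.VideoCapture(0)
ret, frame = cap.read()
cap.release()

# The frame is a matrix of pixel intensities:
# rows x columns x color channels, e.g. (480, 640, 3)
print(frame.shape)
print(frame.dtype)  # uint8: one 8-bit intensity value per channel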
The VideoCapture class is provided for video capture from video files, image sequences, or cameras. For example:
VideoCapture cap(0);
The imwrite API saves an image to a specified file. Its signature is:
bool imwrite(const string& filename, InputArray img, const vector<int>& params = vector<int>())
Details on the VideoCapture class and imwrite in particular can be found in the OpenCV documentation.
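As an illustrative sketch of the optional params argument, here is how a JPEG quality setting can be passed from Python (in recent OpenCV versions the constant is cv2.IMWRITE_JPEG_QUALITY; the older 2.x bindings expose it as cv2.cv.CV_IMWRITE_JPEG_QUALITY):

import cv2

cap = cv2.VideoCapture(0)
ret, frame = cap.read()

# Encode the JPEG with an explicit quality of 90 (0-100, higher is better)
cv2.imwrite("image.jpg", frame, [int(cv2.IMWRITE_JPEG_QUALITY), 90])

cap.release()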
Here is our C++ code takephoto.cpp:
#include "opencv2/opencv.hpp"

using namespace cv;

int main()
{
    VideoCapture cap(0); // open the default camera
    if (!cap.isOpened()) // check if we succeeded
        return -1;

    Mat frame;
    cap >> frame; // get a new frame from the camera
    imwrite("image.jpg", frame);

    // the camera is deinitialized automatically in the
    // VideoCapture destructor
    return 0;
}
You may need to run apt install g++ if g++ is not installed.
Then, compile the code with this command line.
g++ takephoto.cpp -o takephoto `pkg-config --cflags --libs opencv`
Run it with this command line.
./takephoto
A file named image.jpg will be generated under the current directory.
Using OpenCV Python
You can also use OpenCV Python bindings to take a still image. First, you need to manually install the OpenCV Python package on your ARTIK board.
Ubuntu:
apt install python-opencv
Fedora:
dnf install opencv-python
Here is our Python code takephoto.py. Its structure is the same as our C++ code.
import cv2

# Connect to the default camera
cap = cv2.VideoCapture(0)
filename = "image.jpg"

# Capture a single frame (returned as a NumPy array)
ret, frame = cap.read()

cv2.imwrite(filename, frame)
cap.release()
The Python code:
- Imports the OpenCV library
- Connects to the video camera by using the VideoCapture class
- Captures a frame from the camera with its read() function; VideoCapture::read() grabs, decodes, and returns the next video frame
- Saves the captured frame into a file
- Releases control over the camera
Run the Python code from command line.
python takephoto.py
An image named image.jpg will be generated under the current directory.
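The capture above uses the camera's default resolution. If you need a specific frame size, you can request it before reading, as in this sketch. Note that the property constants shown are from the OpenCV 2.x bindings (cv2.CAP_PROP_FRAME_WIDTH and cv2.CAP_PROP_FRAME_HEIGHT in OpenCV 3.x and later), and the camera must support the requested mode.

import cv2

cap = cv2.VideoCapture(0)

# Request 1280x720 frames; the driver may fall back to another mode
cap.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 720)

ret, frame = cap.read()
cv2.imwrite("image_hd.jpg", frame)
cap.release()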
Record Video
Using Command Line
ffmpeg is widely used to encode and decode video streams from a device or file. It also supports grabbing input streams from V4L2 (Video4Linux2) devices. To install the ffmpeg package, use this command.
Ubuntu:
apt install ffmpeg
Fedora:
dnf install ffmpeg
To record a video clip, you can use a command line like the one in this example.
ffmpeg -f alsa -ac 2 -i hw:0 -f v4l2 -s 640x480 -i /dev/video0 -t 20 video.mpg

-f alsa      Use ALSA for audio input
-ac 2        Audio channel is stereo (1 for mono, 2 for stereo)
-i hw:0      Input audio card is 0
-f v4l2      Use V4L2 capture device
-s 640x480   Resolution is 640 x 480
-t 20        Recording length is 20 seconds
The input audio card number can be found from arecord. In the example above, we used the built-in microphone.
arecord -l
**** List of CAPTURE Hardware Devices ****
card 0: artikak4953 [artik-ak4953], device 0: Playback ak4953-AIF1-0 []
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 1: U0x46d0x825 [USB Device 0x46d:0x825], device 0: USB Audio [USB Audio]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
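To record from the USB camera's own microphone instead (card 1 in the listing above), change the ALSA input to hw:1; use -ac 1 if the microphone is mono.
ffmpeg -f alsa -ac 2 -i hw:1 -f v4l2 -s 640x480 -i /dev/video0 -t 20 video.mpg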
Using OpenCV C++ API
You can also use OpenCV APIs to record video from a camera. For simple video outputs, we can use the OpenCV built-in VideoWriter class.
Note that OpenCV is mainly a computer vision library, and its VideoWriter therefore only supports the .avi container. The video file size is limited to 2 GB, and you can only create and expand a single video track inside the container; audio and other tracks are not supported.
Install gstreamer packages:
Ubuntu:
apt install gstreamer*
Fedora:
dnf install gstreamer*
Here is the C++ code videocapture.cpp:
#include "opencv2/highgui/highgui.hpp"
#include <iostream>

using namespace cv;
using namespace std;

int main(int argc, char* argv[])
{
    VideoCapture cap(0); // open the video camera 0

    if (!cap.isOpened())
    {
        return -1;
    }

    // get the width of the video frame
    double dWidth = cap.get(CV_CAP_PROP_FRAME_WIDTH);
    // get the height of the video frame
    double dHeight = cap.get(CV_CAP_PROP_FRAME_HEIGHT);

    cout << "Frame Size = " << dWidth << "x" << dHeight << endl;

    Size frameSize(static_cast<int>(dWidth), static_cast<int>(dHeight));

    // Initialize the VideoWriter object
    VideoWriter videoWriter("myvideo.avi", CV_FOURCC('P','I','M','1'), 20, frameSize, true);

    // If VideoWriter is not initialized successfully, exit the program
    if (!videoWriter.isOpened())
    {
        cout << "ERROR: Failed to write the video" << endl;
        return -1;
    }

    while (1)
    {
        Mat frame;
        bool bSuccess = cap.read(frame); // read a new frame from the camera

        if (!bSuccess)
        {
            cout << "ERROR: Cannot read a frame from the camera" << endl;
            break;
        }

        // write the frame into the file
        videoWriter.write(frame);
    }

    return 0;
}
Compile it with this command line.
g++ videocapture.cpp -o videocapture `pkg-config --cflags --libs opencv`
Run it with this command line.
./videocapture
Using OpenCV Python
The Python code videocapture.py below records a 10-second video clip and saves it into myvideo.avi under the current directory.
# Import libraries
import numpy as np
import cv2

# Define variables
framerate = 20.0    # frames per second
videolength = 10    # length of video in seconds

# Grab the camera
cap = cv2.VideoCapture(0)

# Define the codec and create the VideoWriter object
fourcc = cv2.cv.CV_FOURCC(*'XVID')
out = cv2.VideoWriter('myvideo.avi', fourcc, framerate, (640, 480))

# Record frames for the requested length of video
for i in range(int(videolength * framerate)):
    if cap.isOpened():
        ret, frame = cap.read()
        if ret == True:
            out.write(frame)
        else:
            continue

# Release the camera and video file
cap.release()
out.release()
Launching the code from the command line will capture myvideo.avi.
python videocapture.py
Video Streaming
Streaming video from a USB camera connected to ARTIK is possible using the protocols described below.
Streaming with ARTIK as RTSP Server
You can stream a video with the RTSP protocol over the network by using ffmpeg and ffserver.
Supported resolutions and frame rates for video streaming are:
- VGA (640x480): 30fps, 24fps, 15fps, 10fps, and 5fps
- HD (1280x720): 15fps, 10fps, and 5fps
Set up the ARTIK board as follows.
- Configure ffserver with a server configuration file, such as this ffserver.conf file for VGA 30fps streaming.
Port 8090
BindAddress 0.0.0.0
RTSPPort 8091
RTSPBindAddress 0.0.0.0
MaxClients 1000
MaxBandwidth 10000
CustomLog -

<Feed feed1.ffm>
File /tmp/feed1.ffm
ACL Allow 127.0.0.1
</Feed>

<Stream test1.mp4>
Feed feed1.ffm
Format rtp
VideoFrameRate 30
VideoBitRate 5000
VideoSize 640x480
VideoQMin 3
VideoQMax 31
NoAudio
NoDefaults
</Stream>
- Launch ffserver in the background.
ffserver -f ffserver.conf &
- Run a stream feed with ffmpeg on the ARTIK board.
ffmpeg -f v4l2 -s 640x480 -r 30 -i /dev/video0 http://localhost:8090/feed1.ffm
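As a sketch of an HD variant, streaming at 1280x720 and 15fps (one of the supported combinations listed above) would use matching feed parameters; the VideoSize and VideoFrameRate values in ffserver.conf must be updated to match.
ffmpeg -f v4l2 -s 1280x720 -r 15 -i /dev/video0 http://localhost:8090/feed1.ffm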
Receive and Play Streaming Content
These instructions are for receiving and playing the video stream on a PC, not on the ARTIK board.
You can use media players like mplayer or VLC as the RTSP client to receive and play video streams.
Example using mplayer
mplayer rtsp://[IP Address]:8091/test1.mp4
Example using VLC Client
- Go to File -> Open Network -> Network
- Enter the RTSP server IP address, port number, and streaming content name as set in the configuration file described previously
- Click "Open" to launch the streaming content.
Streaming with OpenCV Python and Flask
In this code snippet, we will use OpenCV and Flask to implement live video streaming.
Flask is a micro web framework that provides native support for streaming. More details can be found in the Flask documentation.
Flask streaming uses a technique where the server provides a response to a request in chunks. During streaming, each chunk replaces the previous one on the page, producing an animated effect in the browser window. We use a multipart response to implement these in-place updates, where each chunk in the stream is an image.
Prerequisite: Install Flask framework.
pip install flask
Here is our Python code livestream.py:
from flask import Flask, Response
import cv2

class Camera(object):
    def __init__(self):
        self.cap = cv2.VideoCapture(0)

    def get_frame(self):
        ret, frame = self.cap.read()
        cv2.imwrite('stream.jpg', frame)
        return open('stream.jpg', 'rb').read()

app = Flask(__name__)

def gen(camera):
    while True:
        frame = camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')

@app.route('/')
def video_feed():
    return Response(gen(Camera()),
                    mimetype='multipart/x-mixed-replace;boundary=frame')

if __name__ == '__main__':
    app.run(host='0.0.0.0', debug=True)
Run the code with this command line.
python livestream.py
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger pin code: 647-240-038
From your host machine, launch a web browser and visit http://[ARTIK IP address]:5000/ to view the live stream.
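Writing every frame to stream.jpg on storage works, but you can avoid the disk round trip by encoding the frame in memory with cv2.imencode. Here is a minimal sketch of an alternative get_frame() method (jpeg.tostring() is the OpenCV 2.x-era NumPy spelling; newer NumPy prefers jpeg.tobytes()):

    def get_frame(self):
        # Grab a frame and JPEG-encode it in memory instead of on disk
        ret, frame = self.cap.read()
        if not ret:
            return b''
        ret, jpeg = cv2.imencode('.jpg', frame)
        return jpeg.tostring()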