I'm looking for a Linux driver (preferably for Ubuntu) that allows me to use my Kinect 2 as a webcam in OBS and Skype, i.e. so that the Kinect 2 camera is recognized by both programs. Can your tool offer this feature? If so, how do I set it up, and can you suggest some documentation? Thanks.
Hello Ubuntu lovers, I compiled and loaded the kernel driver module "gspca-kinect2" from the GitHub repository below because I wanted my Kinect 2 to be detected as a webcam for Skype, but it did not work. The commands I issued are explained here...
If you look at the /etc/systemd/system/v4l2-kinect.service systemd service definition file, you will find the ffmpeg command that gets executed at system boot or when the service starts. That command takes input from the kernel driver, converts it into a format suitable for other clients, and feeds it into one end of the video loopback device.
So you can see from -i /dev/video0 that ffmpeg tries to read from /dev/video0, which is apparently something else on your system. Maybe your machine already has a built-in webcam at /dev/video0? In any case, you need to change that to -i /dev/video1 to feed the Kinect camera data to ffmpeg.

Then you can see that ffmpeg feeds the video, converted from Motion JPEG to YUYV422, into the video4linux loopback device at /dev/video10, which is also set up by the installation script and configured in /etc/modprobe.d/v4l2loopback.conf.

If you edit the files so that ffmpeg gets its input from the Kinect and feeds the converted stream into the loopback device, and reboot the machine for good measure, then there should be another device, probably /dev/video10, that OBS, Skype, etc. may be able to read. The setup script assumed the real Kinect color camera was on video0, but that does not seem to be the case on your setup, hence one reason for the failure.
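For reference, the edited service could look roughly like the sketch below. This is an illustration, not the exact contents of the installer's file: it assumes the Kinect color stream shows up as MJPEG on /dev/video1 and that the loopback device is /dev/video10 (adjust both to your system).

```ini
# /etc/systemd/system/v4l2-kinect.service (sketch; device nodes are assumptions)
[Unit]
Description=Feed the Kinect v2 color stream into the v4l2loopback device
After=multi-user.target

[Service]
# Read MJPEG from the Kinect (here /dev/video1), convert it to YUYV422,
# and write it into the loopback device at /dev/video10.
ExecStart=/usr/bin/ffmpeg -f v4l2 -input_format mjpeg -i /dev/video1 -f v4l2 -pix_fmt yuyv422 /dev/video10
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After editing, run sudo systemctl daemon-reload && sudo systemctl restart v4l2-kinect, and check that /etc/modprobe.d/v4l2loopback.conf pins the loopback device to the number you expect, e.g. a line like "options v4l2loopback video_nr=10".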
Open3D provides Python and C++ example code for the Open3D Azure Kinect MKV Reader. Please see examples/cpp/AzureKinectMKVReader.cpp and examples/python/reconstruction_system/sensors/azure_kinect_mkv_reader.py for details.
If you see the error message [ERROR] [DepthRegistrationOpenCL::init] Build Log: stringInput.cl:190:31: error: call to 'sqrt' is ambiguous, fix it in kinect2_bridge.launch by updating depth_method to opengl (or whichever method works for you) and reg_method to cpu (reference: -kinect-v2-point-clouds-with-ros-in-arch-linux/).
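In the iai_kinect2 package those two settings are launch-file arguments, so the edit is just changing their default values. A rough sketch of the relevant lines (the argument names come from kinect2_bridge.launch; the surrounding file content is omitted):

```xml
<!-- excerpt sketch of kinect2_bridge.launch, not the full file -->
<launch>
  <!-- move depth processing off OpenCL to avoid the ambiguous 'sqrt' build error -->
  <arg name="depth_method" default="opengl"/>
  <!-- run depth registration on the CPU instead of OpenCL -->
  <arg name="reg_method" default="cpu"/>
</launch>
```

Alternatively, you can override the arguments without editing the file at all: roslaunch kinect2_bridge kinect2_bridge.launch depth_method:=opengl reg_method:=cpu.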
Once everything is set up, you can fly around in the picture by keeping the Alt 1 button pressed, and release it to let the Kinect take control. The setpoint GUI lets you change the setpoints and run some pre-programmed patterns.
Using my ros-jade-kinect2 AUR package, you can install all required dependencies (a ton of ROS packages, the Point Cloud Library, and libfreenect2), which are all available in the Arch User Repository.
Right-click the deb file to open it with Archive Manager, go into data.tar.gz, and find the libdepthengine.so.2.0 file under ./usr/lib/x86_64-linux-gnu/. Drag the file to /Azure-Kinect-Sensor-SDK/build/bin. If you cannot find the file, I have it here.
If you want an API to get images off your device, libfreenect (GitHub - OpenKinect/libfreenect: Drivers and libraries for the Xbox Kinect device on Windows, Linux, and OS X) is ROS-version agnostic (assuming you are on Linux or macOS). But since you are asking about a ROS 2 version, I suppose you are looking for an implementation of a ROS 2 node that leverages an existing Kinect v1 driver and publishes color and depth image topics? Some googling turned up this: GitHub - fadlio/kinect_ros2: C++ ROS2 driver for Kinect v1 (Xbox 360). It is not too stale, and at the very least could be a starting point for you. Out of curiosity, what were you using in ROS 1?
I also had some issues running the examples, even when lsusb was showing the Kinect. They were mostly permissions-related. I noted the steps I needed to take on the NVIDIA Developer discussion forum here: -kinect-with-jetson-tk1/?offset=33#4569415 Of course, this is mostly just some pointers to make sure you follow all of the steps in _Started
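For the permissions part, the usual fix on Linux is a udev rule that grants non-root access to the Kinect's USB devices. A minimal sketch for the Kinect v1 follows; the file name is arbitrary, 045e is Microsoft's USB vendor ID, and 02ae/02ad/02b0 are the Kinect camera, audio, and motor product IDs (libfreenect ships a more complete rules file):

```
# /etc/udev/rules.d/51-kinect.rules (sketch)
# Allow non-root users to access the Kinect v1 camera, motor, and audio devices.
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02ae", MODE="0666"
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02b0", MODE="0666"
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02ad", MODE="0666"
```

Reload the rules with sudo udevadm control --reload-rules && sudo udevadm trigger, then unplug and re-plug the Kinect.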