Vision Sensor Simulator (VSS)
The Vision Sensor Simulator (VSS) is a Jupyter Notebook app that lets you write and test vision sensor code using your own webcam. With the Vision Utility that VEX provides, you can only see what the vision sensor is detecting in real time while no code is running on the bot: you can look at the vision sensor's input, or at its output, but never both at once. Debugging like this tends to devolve into hours of compiling and uploading while trying to track down the problem, especially on a sensor as noisy as the vision sensor.
With the VSS, you can write all the logic necessary for vision sensor processing against an API similar to that of PROS. Instead of the opaque "VEX Signatures," the VSS uses HSV (Hue, Saturation, Value) values as its signatures, which makes tuning much simpler. And since it uses your webcam, you can test your code without the bot on hand.
With the VSS, you can do the following:
Detect up to 7 distinct signatures with your own webcam
Pause the output video stream
Save signatures to the notebook to save yourself time
Use the VSS API, written with OpenCV, to do blob detection similar to that of the vision sensor
To install, head to the UvuvLib GitHub repo and download UvuvLibVSS.ipynb; it's in the main directory.
After doing this, open your IDE of choice and install Python and Jupyter Notebook. Resources for both are below.
NOTE: Depending on the IDE, Python may come pre-installed. To check whether Python is already installed on your computer, run `python --version` in your computer's command prompt.
After both of these are installed, open UvuvLibVSS.ipynb in Jupyter Notebook.
Once the Jupyter Notebook has been opened, you'll be greeted with a program like the one below.
"vid" is your webcam. In OpenCV, the webcam used in the webcam feed is defined by the number inside the constructor of VideoCapture(). For example, if I want to use my default camera, I'll use 0. If I have a different webcam that I'd like to use, I'd use 1, 2, etc. depending on how many webcams are plugged in. It's recommended to do some reading on this page below to understand what you'll need here.
"bool_read" is a variable that says whether or not the webcam feed was read properly. If it's true, it shouldn't have any problems later on. "image" is a throwaway variable that isn't used.
"setup()" (obviously) sets up the VSS to be used by you. It makes all the trackbars and UI elements and also loads (if any) saved signatures off your machine. Very necessary so do not take it out.
Then there's the loop. As also mentioned in the API section, stepVSS() returns an array of every object the VSS currently sees, which is then used as input for the other functions. Saving the stepVSS() output to a variable is key for the loop.
Lastly, you need to check on your own whether the array is empty. Without that error handling, the program will crash constantly, and that is not the VSS's fault.
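The shape of the loop, including the empty-array check, can be sketched as below. setup() and stepVSS() are the real VSS names, but their exact signatures and the shape of the objects they return aren't documented here, so they are stubbed out for illustration:

```python
# Hypothetical sketch of the notebook's main loop with the VSS
# calls stubbed out; the return shapes are assumptions.
def setup():
    # Real version builds trackbars/UI and loads saved signatures.
    print("VSS initialized")

def stepVSS(frame):
    # Stub: pretend nothing is detected this frame.
    return []                   # assumed: list of detected objects

def process(objects):
    # The empty-array check the text warns about: guard before
    # indexing, or the loop crashes whenever nothing is detected.
    if not objects:
        return None
    return objects[0]           # e.g. act on the first detected blob

setup()
for frame in range(3):          # stand-in for the webcam read loop
    objects = stepVSS(frame)    # save stepVSS() output to a variable
    target = process(objects)
    print(target)               # None whenever nothing is seen
```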
Running the code there should yield your face and a bunch of trackbars being displayed.
If you don't see the windows pop up, check your taskbar. On some machines they start minimized until you click on them in the taskbar.
The API has only 6 functions:
stepVSS()
getBySize()
getByHSV_Sig()
getObjCnt()
getCameraWidth()
getCameraHeight()
With those 6 functions, you can match the functionality of real vision sensor APIs.
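Only the six names above come from the VSS; their argument and return shapes aren't documented here, so the sketch below stubs them with assumed signatures purely to show how they might compose:

```python
# Hypothetical stubs: every argument/return shape is an assumption.
def stepVSS():
    return [{"hsv_sig": 1, "size": 120},    # pretend two blobs seen
            {"hsv_sig": 2, "size": 40}]

def getBySize(objs, n):
    # Assumed: nth-largest object by size.
    return sorted(objs, key=lambda o: o["size"], reverse=True)[n]

def getByHSV_Sig(objs, sig):
    # Assumed: objects matching one HSV signature.
    return [o for o in objs if o["hsv_sig"] == sig]

def getObjCnt(objs):
    return len(objs)

def getCameraWidth():
    return 640                  # assumed webcam resolution

def getCameraHeight():
    return 480

objs = stepVSS()
if getObjCnt(objs) > 0:                 # the empty check from earlier
    biggest = getBySize(objs, 0)        # largest detected object
    sig1_objs = getByHSV_Sig(objs, 1)   # objects matching signature 1
    print(biggest, len(sig1_objs), getCameraWidth(), getCameraHeight())
```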