We’re starting the Justin Anderson case as a challenge: measuring distances with Factory IO. Since we don’t have distance sensors of this kind available, we’ll use an image received from a camera.
As always, this is another contribution to the Factory IO project, dedicated to students who enjoy analyzing, learning, and understanding programming from the inside out.
Furthermore, since no practical example has been presented for this scenario, I’ll create one myself so that students and anyone else interested only have to follow my steps, which makes it much simpler and more enjoyable.
You can follow all the steps on my channel; here are the first two parts, which cover the foundation we’re starting from: the trigonometric calculations used to obtain distances and the conversion of the hypotenuse into a linear measurement. It’s a unique research and development project that would be impossible to carry out without Factory IO.
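For readers who prefer to see the idea in code rather than only in the videos, here is a minimal sketch of the kind of trigonometry involved, under assumed geometry: a camera mounted above the belt, with each image row mapping to a known viewing angle. The camera height, degrees per pixel, and center row below are illustrative placeholders, not the values used in the project.

```python
import math

# Assumed calibration values for illustration only
CAM_HEIGHT_M = 1.5       # camera height above the belt
DEG_PER_PIXEL = 0.05     # angular resolution per image row
CENTER_ROW = 240         # image row that looks straight down

def horizontal_distance(pixel_row):
    """Convert an image row to a linear distance along the belt."""
    angle = math.radians((pixel_row - CENTER_ROW) * DEG_PER_PIXEL)
    hypotenuse = CAM_HEIGHT_M / math.cos(angle)   # line of sight to the object
    return hypotenuse * math.sin(angle)           # projection onto the belt (the linear measurement)

print(horizontal_distance(400))  # ~0.21 m for a row 160 px below center
```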
Thanks a ton for the effort and for sharing this; I always love seeing fresh content.
That said, speaking from the real world (I commission full sorters and distribution centers for a living): using a camera + trigonometry just to measure distances or positions is super cool as an academic exercise and a way to explore Factory I/O’s vision capabilities, but in a real plant no project manager would ever approve it.
In actual projects we solve this kind of measurement 100% of the time with basic tracking (sketched in code below):
belt encoder (speed)
one photo-eye at the entrance to capture the leading edge
one photo-eye at the exit (or any reference point)
That’s it → perfect position tracking, almost zero cost, and rock-solid reliability.
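To make that concrete, here is a minimal sketch of encoder-plus-photo-eye tracking. The signal names and the calibration constant are hypothetical placeholders, not real Factory I/O tags or anyone’s actual PLC code:

```python
PULSES_PER_MM = 8.0  # assumed encoder calibration (pulses per mm of belt travel)

class CartonTracker:
    def __init__(self):
        self.latched = []        # encoder count when each leading edge hit the entry eye
        self.prev_entry = False
        self.prev_exit = False

    def scan(self, encoder_pulses, entry_eye, exit_eye):
        # Rising edge on the entry photo-eye: start tracking a new carton
        if entry_eye and not self.prev_entry:
            self.latched.append(encoder_pulses)
        # Rising edge on the exit photo-eye: the oldest carton has arrived
        if exit_eye and not self.prev_exit and self.latched:
            self.latched.pop(0)
        self.prev_entry, self.prev_exit = entry_eye, exit_eye

        # Each carton's position = belt travel since its leading edge was detected
        return [(encoder_pulses - start) / PULSES_PER_MM for start in self.latched]

tracker = CartonTracker()
print(tracker.scan(encoder_pulses=1000, entry_eye=True, exit_eye=False))   # [0.0]
print(tracker.scan(encoder_pulses=1800, entry_eye=False, exit_eye=False))  # [100.0]
```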
Vision cameras are only justified for very specific cases (random orientation, code reading on damaged labels, complex shapes, etc.). For simple distance/position measurement it’s pure over-engineering and a maintenance headache.
Your series looks awesome for learning and pushing the tool, but I’d just add a small note somewhere (title/description):
“This is an educational approach using vision because Factory I/O doesn’t have certain sensors yet. In real industry we do it with simple tracking and two photo-eyes.”
That way students see both the creative solution and the real-world standard.
Anyway, I’m definitely watching the whole thing. Great work man!
It’s not Justin who’s doing the work, but thanks for commenting nonetheless.
That said, I should mention that what you’re seeing isn’t computer vision; it’s image processing. The camera sends an image, and the machine acts based on pixel coordinates and colors. This fourth video shows how the machine works, specifically line 1, since line 2 isn’t executed until line 1 is working correctly:
You’re right about computer vision; it’s used in other applications (there are some on my channel too). In fact, I could do the same thing with computer vision, but it would be much more complex because of the libraries involved. This way, I think it’s more approachable for students who want to try it, since it doesn’t require libraries or anything complex if they’ve never done this before.
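To give an idea of how little is needed, here is a minimal, library-free sketch of that kind of pixel processing: the frame is treated as a plain 2D list of RGB tuples and an object is located by its color. The frame format, the color threshold, and the function name are illustrative assumptions, not the project’s actual code:

```python
def find_blue_box(frame):
    """Return the (row, col) centre of pixels that look 'blue enough', or None."""
    rows, cols, count = 0, 0, 0
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            if b > 150 and r < 100 and g < 100:   # crude color threshold
                rows += y
                cols += x
                count += 1
    if count == 0:
        return None
    return rows / count, cols / count

# Tiny 3x3 synthetic frame with one blue pixel at row 1, col 2
frame = [[(0, 0, 0)] * 3 for _ in range(3)]
frame[1][2] = (10, 20, 200)
print(find_blue_box(frame))   # -> (1.0, 2.0)
```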
If everything went well with the trigonometry, this should be the result.
In the next contribution, we’ll use real computer vision to measure all the parameters of the objects in the image, including the distance between them.
If anyone has another idea, for example regarding distances, please share it here and we’ll analyze the case; we’ll see what can and can’t yet be done with Factory IO.