Let’s start a thread about what Justin Anderson said.
I’ve never tried alternative solutions to what you’re suggesting, but if I had to, I’d use two sensors: one that detects the 1-meter zone and another that detects the 3-meter zone. I think that’s much better and more reliable.
To be honest, I can’t follow what you’re saying unless you’re more explicit and give an example of what you mean. Three meters is a reasonable distance for a sensor to measure accurately with its signal. Or, if you want to detect that the part is within those 3 meters, as I’ve already mentioned, you could use two sensors, one for each zone.
If no practical example is provided that uses the initial idea (the one shown in the image), detecting objects less than 1 meter away and more than 3 meters away while leaving the area between 1 and 3 meters undetected, then that sensor doesn’t seem very useful. At least, that’s my opinion. Again, I’m not criticizing it, just explaining my point of view, or my analysis if you prefer.
In any case, I’ll analyze the presented distance scenario. I won’t do it with that type of detector because FACTORY IO doesn’t offer it, but I can do it with computer vision, although I don’t know how or where to use it. This is the second time I’ve done this here: finding the solution before finding the problem.
I think you’ve misunderstood something. Precisely between the minimum and maximum distances, the object is NOT detected. Hence my doubt: not a CRITICISM, but an analysis of what this user is stating, who, by the way, has rated the image as very accurate.
Detecting an object that is closer than a certain threshold or farther than a certain threshold is functionally equivalent to detecting an object that is between a lower and a higher threshold and inverting the signal.
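That equivalence can be sketched in a few lines. This is only an illustration of the logic described above, not anything from FACTORY IO; the function names and the 1 m / 3 m defaults are taken from the thread's example.

```python
def in_window(distance: float, near: float = 1.0, far: float = 3.0) -> bool:
    """True when the object sits inside the near-to-far window (e.g. 1-3 m)."""
    return near <= distance <= far

def outside_window(distance: float, near: float = 1.0, far: float = 3.0) -> bool:
    """Detecting '<near OR >far' is just the inverted window signal."""
    return not in_window(distance, near, far)
```

So a single window-mode sensor plus an NO/NC inversion already covers the "closer than 1 m or farther than 3 m" case.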
@justin.anderson, @janbumer1 We could potentially implement this as a new sensor configuration, where a sensor can internally combine multiple conditions (ranges, logic, inversion, etc.) instead of requiring users to manually combine several sensors in the Driver panel. We’ve already added the option to define whether a sensor signal is NO/NC, so extending the configuration system in this direction is definitely feasible.
Before moving forward, it would be useful for us to understand how common this type of functionality is in real-world industrial sensors. From your experience, could you point us to specific real-world sensors or models that support these features? Having concrete references would help us design the feature in a way that aligns with actual industry practice.
Hey Bruno, see for example the Sick WTT12, if I remember correctly.
With that sensor, you’re able to set up to 8 different fixed sensing distances, and you can choose two different detection distances, within which you’ll get a signal output.
Applications include, as mentioned, positioning within a certain distance. We also used those sensors for rudimentary object differentiation, using the multiple fixed sensing distances to distinguish between different object types.
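The differentiation idea above could be sketched like this. The taught distances, tolerance, and type labels are invented for illustration; the real unit teaches its distances on the device itself rather than in software.

```python
# Hypothetical taught distances in metres (the real sensor supports up to 8).
TAUGHT_DISTANCES = [0.3, 0.6, 0.9, 1.2]

def classify(measured: float, tolerance: float = 0.05) -> str:
    """Label an object by the first taught distance its measurement matches."""
    for i, d in enumerate(TAUGHT_DISTANCES):
        if abs(measured - d) <= tolerance:
            return f"object_type_{i + 1}"
    return "unknown"
```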
We used to use a similar sensor to the SICK unit, but we semi-recently migrated to Leuze. This is the family of sensors we use window mode with. (It’s been a minute since I was the guy in the field; now I’m mostly behind a screen.)
It’s handy when you don’t have the ability to add an analog, IO-Link, or networked sensor. I wouldn’t say it’s the most common sensor we have, but I do have a few of them per site. I think if you made an option to allow a sensor to be invisible to the physics engine, I could do it with a diffuse sensor and the inversion property. (I can set the detection distance already.) It’s a limitation of the real world that I can’t physically put a sensor somewhere, so I have to mute certain areas. I can only mute the physics engine of the real world for precisely 1 pallet. HAHA.
amjavi, I didn’t look that closely at the image (I have a few things going on); there is only one detector, Detector 1. The sensor mutes internally and only sends a single digital input to the PLC if it detects the leading edge of an object within those ranges. In FactoryIO, if I could combine the sensors within FIO, then I could add the 2nd sensor as you showed in the image and combine the signals in the drivers screen to the PLC. In the real world, though, there are situations where you can’t physically mount things in those places. It’s also a question of cost: can I do it with a vision system? Sure, but that’s many thousands of dollars and a lot of complexity for something a simple sensor can do internally. The real world isn’t as cut and dried as FIO; we have to make sacrifices depending on the type of machine we’re dealing with.
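For reference, combining the two diffuse sensors from the image into the single "out-of-window" input discussed here is just a couple of boolean operations. This is a sketch of the driver-side logic, not FactoryIO code; the sensor names are hypothetical.

```python
def combined_input(sensor_1m_blocked: bool, sensor_3m_blocked: bool) -> bool:
    """
    sensor_1m_blocked: diffuse sensor that sees anything within 1 m.
    sensor_3m_blocked: diffuse sensor that sees anything within 3 m.
    An object between 1 m and 3 m trips only the 3 m sensor; the PLC should
    receive True when the object is NOT in that window (<1 m or >3 m).
    """
    in_window = (not sensor_1m_blocked) and sensor_3m_blocked
    return not in_window
```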
We’ll get to the topic as soon as I finish the video and upload it to YouTube. I created it using FactoryIO, and you’ll see it’s not expensive and can be simulated with FactoryIO.
What caught my attention in your story is the idea of physically measuring distances, that is, using numbers that can be used in a PLC. That’s what I found most interesting about your post, and that’s why we’re going to implement it with FactoryIO. Obviously, it has its limitations, but it will be a great project for students to see that there’s more to life than just an elevator.
We’ll keep you updated and discuss it, since you’re the one who started this whole thing, as I’ve never used that type of detector myself.
Hi Justin Anderson, we can now discuss this topic. I’ve replaced the sensor with a camera that costs around €50. The camera’s software is installed on the PC, and the camera sends the image via IP. We process this image within certain proportions or scales, and the result is what you see in these videos.
Note that no external libraries are used; only the colors of the pixels received in the image are measured. In other words, it’s NOT computer vision; it’s image processing, which I believe is more than sufficient in this case.
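The pixel-colour approach described above could look something like this minimal sketch. The target colour, tolerance, and pixel count are assumptions for illustration; the frame is just a flat list of RGB tuples, standing in for whatever the camera delivers over IP.

```python
def part_present(frame, target=(255, 0, 0), tol=30, min_pixels=50):
    """True when enough pixels in the frame are close to the target colour.

    frame: iterable of (r, g, b) tuples; no external imaging library needed.
    """
    def close(px):
        return all(abs(c - t) <= tol for c, t in zip(px, target))
    return sum(1 for px in frame if close(px)) >= min_pixels
```

Counting matching pixels per region of the image is also how proportions or scales can be estimated without any computer-vision library.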
Furthermore, since no specific use has been presented for this case, I’ll invent one myself, and it will be very interesting.
Also, this can be done in many other ways, but we’re here to program for the sake of programming.
And one more thing: everything you see is possible thanks to FactoryIO. Otherwise, Justin Anderson could present his idea, but we couldn’t develop it, and therefore we wouldn’t be able to learn anything from this topic.