Pekat Vision Software Light
Highlights
Pekat Vision is AI-based software for industrial visual inspection and quality assurance. It contains the right set of self-learning tools, which can be combined and interwoven with scripting code; our experience has shown that exactly these tools together can tackle practically any vision task in manufacturing. Pekat Vision can check products or incoming raw materials whose look or shape is unstable, and it is universal: it can be used with all kinds of materials such as wood, stone, metal, lacquer, castings, leather, rubber, or fabrics.

Pekat Vision can work in a so-called unsupervised mode, in which it finds defects it has never seen before; it is enough to train it on images of defect-free objects or material. Or it can work in a supervised mode, in which it is trained to search for a specific defect or surface problem, e.g. scratches, rust, leaks, or holes. It does not matter that each product or material is slightly different, and it does not matter whether it is difficult or even impossible to describe what a defect is; Pekat Vision is able to find it anyway.
Pekat Vision Software Features
- The unsupervised anomaly detector can be trained on positive (error-free) examples alone; just a few example images are enough. It can inspect stable objects or even completely unstable surfaces such as deformed textiles with patterns.
- The supervised surface check can be trained to find defects on completely heterogeneous surfaces, for example rust, abrasion, or leakage.
- The object detector and classifier can find objects with an unstable shape, e.g. knots in wood, even when the objects are rotated.
- Inspection modules can be combined into a complex flow and even interwoven with custom image preprocessing or scripting code.
- The OCR (optical character recognition) module finds individual characters or words in the image.
- The unifier can be used when the inspected objects appear rotated or at scattered positions in the images; it unifies their position and rotation for further processing.
- The preprocessing module is a tool for easy image editing before further processing. It allows rotation, cropping, scaling, background normalization, background removal, and many more transformations.
- Auto sensitivity is part of the anomaly detection module; it determines the best sensitivity value, ideally finding the border between good and defective products.
- The measurement module performs simple measurements of object dimensions.
- Runtime statistics show counts of OK and NOK images sent via the API for the date and time range you choose.
- The statistics module calculates how successfully the application evaluates images. It shows a confusion matrix and related metrics, as well as the minimum, maximum, and average processing times.
- The report generator is part of the statistics module. It automatically generates an HTML report including all the information from the statistics, plus training images (if chosen), evaluated testing images, and a graph of the modules used.
- The Python code module gives you high flexibility. You can preprocess images (e.g. with OpenCV and NumPy), add custom logic, or even call external interfaces; see the sketch after this list.
- The output module can trigger an action once an image from the camera is processed (All/Good/Bad). You can run a command line (e.g. to launch a script), send an HTTP request (GET or POST), or establish a connection to your PLC using the Profinet or TCP protocol.
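As an illustration of the Python code module, here is a minimal sketch of a custom preprocessing step using OpenCV and NumPy. The entry-point name and the context dictionary shown below are assumptions made for this example, not the documented code module interface, so check the Pekat Vision documentation for the exact signature.

```python
# Minimal sketch of a custom preprocessing step inside the Python code module.
# NOTE: the entry point name "process", the BGR image format, and the "context"
# dictionary are assumptions for this example; consult the Pekat Vision
# documentation for the exact interface of the code module.
import cv2
import numpy as np


def process(image: np.ndarray, context: dict) -> np.ndarray:
    """Normalize lighting and suppress noise before further modules run."""
    # Convert to grayscale and equalize the histogram to reduce the
    # influence of uneven illumination.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    equalized = cv2.equalizeHist(gray)

    # Light Gaussian blur to suppress sensor noise.
    smoothed = cv2.GaussianBlur(equalized, (3, 3), 0)

    # Store a custom value that later modules or the output stage could use.
    context["preprocessing"] = {"mean_intensity": float(np.mean(smoothed))}

    # Return a 3-channel image so downstream modules see the usual format.
    return cv2.cvtColor(smoothed, cv2.COLOR_GRAY2BGR)
```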
Pekat Vision Light
- Object Detection
- Classification
- Measurement
- OCR
- Image Preprocessing
- Rotation Unifier
- Validation Statistics
- Runtime Statistics
- Live Camera View, Operator View
- Inspection and Debugging
- Python Code
- Output
- Report generator
- REST API, SDK
Recommended Hardware
Runtime
- For typical requirements (fast processing and convenient usage) we recommend an NVIDIA GPU with enough memory. A rough estimate is 3 GB per deep-learning module. To be on the safe side, we recommend the NVIDIA GeForce® RTX 2080 Ti.
- For price-sensitive use cases, it is possible to use a cheaper NVIDIA GPU or a computer without a GPU at all; longer recognition times have to be expected.
- For embedded use cases we support embedded hardware too, e.g. ARM-based devices such as the NVIDIA TX2 or Xavier, or some types of FPGA. Contact us for more information.
Development
- For training deep-learning modules, we recommend a fast NVIDIA GPU with a lot of memory, such as the NVIDIA GeForce® RTX 2080 Ti, plus a PC with at least 16 GB of RAM.
- Training deep-learning modules without a GPU technically works, but we strongly discourage it, as training can take hours or even days.
Do you need a GPU?
- If you use or train the Anomaly Detection or Surface Detection modules, you can use a PC without a GPU.
- If you use the Image Preprocessing, Measurement, or Code modules, you can safely use a PC without a GPU.
- If you are going to use the Classification or Object Detection modules, we strongly recommend using a GPU for training. It is possible to use a PC without a GPU for image recognition (inference); however, the inference times will be significantly longer.
Integrating Pekat Vision
There are two ways to integrate Pekat Vision. In the first, the camera is connected directly to Pekat Vision (a GenICam camera). This option is the easiest: Pekat Vision processes input from the camera in a loop, and you only have to enable the camera and set what should happen after evaluation (output). In the second, Pekat Vision acts as middleware and you use additional software for capturing images or further processing. This method has 3 steps and requires programming knowledge.
1. Send images to PEKAT
- You can use the SDKs from https://github.com/pekat-vision/. We support the Python, C#, and C++ programming languages.
- You can use the LabVIEW plugin http://sine.ni.com/nips/cds/view/p/lang/en/nid/218329
- You can send images to the HTTP API. We created some examples; a sketch follows below.
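As an illustration of step 1, here is a minimal Python sketch of sending an image over the HTTP API. The host, port, endpoint path, and query parameter below are assumptions made for this example; consult the Pekat Vision API documentation or the SDK examples for the exact URL and parameters.

```python
# Minimal sketch of sending an image to a running Pekat Vision project over HTTP.
# NOTE: the endpoint path "/analyze_image" and the "response_type" parameter
# are assumptions for this example; verify them against the Pekat Vision
# API documentation for your version.
import requests

PEKAT_URL = "http://localhost:8000/analyze_image"  # assumed host, port, and path

with open("part_001.png", "rb") as f:
    image_bytes = f.read()

response = requests.post(
    PEKAT_URL,
    params={"response_type": "context"},  # ask for the JSON evaluation context
    data=image_bytes,
    headers={"Content-Type": "application/octet-stream"},
)
response.raise_for_status()
result = response.json()  # JSON data structure describing the result (step 2)
print(result)
```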
2. Pekat processes the image.
- It returns a data structure (JSON) which describes the result.
3. Your script from step 1 processes the response.
- The response may contain information about the objects that were found or the overall evaluation; see the sketch below. You can use the inspection tab for debugging.
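For illustration of step 3, the sketch below shows how a client script might read the returned JSON. The field names used here ("result", "detectedRectangles", "className", and so on) are assumptions chosen for this example; the actual keys depend on the modules in your flow, so inspect a real response (e.g. in the inspection tab) before relying on specific fields.

```python
# Minimal sketch of processing the JSON response from step 2.
# NOTE: the keys "result", "detectedRectangles", and "className" are
# assumptions for this example; inspect a real response from your project
# to see the actual structure produced by your flow.
def handle_response(result: dict) -> None:
    # Overall evaluation of the image, if the flow provides one.
    if result.get("result") is False:
        print("NOK image - triggering reject action")
    else:
        print("OK image")

    # Objects found by detection modules, if any.
    for rect in result.get("detectedRectangles", []):
        print(
            f"found {rect.get('className', 'object')} at "
            f"x={rect.get('x')}, y={rect.get('y')}, "
            f"w={rect.get('width')}, h={rect.get('height')}"
        )


# Example usage: reuse the "result" variable from the previous sketch.
# handle_response(result)
```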