Machine Vision Technology Forum 2017 in Germany - Thank you!

Machine Vision Technology Forum 2017 - Schedule & presentations

Here you will find an overview of all sessions offered at our Machine Vision Technology Forum in Germany.

Get an overview of all presentations and put together your personal programme for the two days of the event during online registration.


SCHEDULE for 17th and 18th October 2017 in Unterschleissheim


DE = German session | EN = English session

17th of October 2017

10:30-10:55
11:00-11:25
11:30-11:55
12:00-13:25 Lunch break
13:30-13:55
14:00-14:25
14:30-14:55
15:00-15:40 Coffee break
15:45-16:10
16:15-16:40
16:45-17:10
18:30 Evening event

18th of October 2017

9:00-9:25
9:30-9:55
10:00-10:25
10:30-11:10 Coffee break
11:15-11:40
11:45-12:10
12:15-12:40
12:45-14:10 Lunch break
14:15-14:40
14:45-15:10
15:15-15:40

HANDS-ON ABSTRACTS


BOA Hands-on: creating machine vision applications in Teledyne DALSA BOA smart cameras

STEMMER IMAGING

Creating machine vision applications using Teledyne DALSA BOA smart cameras | iNspect Express provides capabilities for an extensive range of inspection tasks, such as positioning, identification, verification and measurement | Hardware basics of the Teledyne DALSA BOA smart camera | Introduction to the BOA software iNspect | Creating sample applications

LMI Hands-on: 3D measurement using Gocator, ‘a step-by-step guide’

STEMMER IMAGING

When to use 3D technology | Learn what settings work best for your application | Learn how to solve 3D tasks with the innovative user interface | Configure measurement tools | Select the output which best suits your application | Work with multiple sensor systems

Sherlock Hands-on: how to create your first machine vision solution in Sherlock multi camera software

STEMMER IMAGING

When to use Sherlock | The basics of Sherlock | What pre-processors to use | What algorithms to use | How to communicate from your program to the outside world (PLC or robot) | A brief look at creating a graphical user interface for your application


PRESENTATION ABSTRACTS


Fast machine vision solution development for IIoT-based smart factories

Adlink

17.10. 16:15-16:40 | 18.10. 12:15-12:40 | English sessions

Speaker: Alex Liang

In manufacturing, emerging Internet of Things (IoT) technology has ushered in the Industry 4.0 initiative, migrating from conventional automated production to IoT-based intelligent automation. Semi-automated or standalone automatic machining is being replaced by network-connected processes based on M2M (machine-to-machine) and M2P (machine-to-person) communication. This, in combination with corporate information systems and analytics, creates endless possibilities for the smart factory model.

As the IIoT-based smart factory initiative encourages manufacturers to actively implement smart automation, machine vision has become indispensable to the quality control of automated production. A solution providing fast and easy development of machine vision applications is a key factor in empowering IIoT-based smart factories.

ASIC – Utilisation in camera design for machine vision

Allied Vision

17.10. 10:30-10:55 | 18.10. 10:00-10:25 | German sessions

Speaker: Jochen Braun

Before the ALVIUM technology was launched, all machine vision cameras relied on FPGAs as their brains. ASICs are widely used in many electronic devices, so why had they never been used in a machine vision camera?

This presentation will explain the difference between FPGAs and ASICs and will detail the advantages and disadvantages of both technologies relating to the design and manufacture of machine vision cameras.

Data from comparisons of real cameras will be shown to give a practical demonstration of the differences in real world applications.

MIPI CSI-2 – A new camera interface for embedded machine vision systems

Allied Vision

17.10. 13:30-13:55 | 18.10. 12:15-12:40 | German sessions

Speaker: Jochen Braun

Before 2017 there were no machine vision cameras on the market using the MIPI CSI-2 interface, and many applications had been successfully solved with the established interfacing methods (GigE, USB3, etc.). So why do we think there is a need for a new interface? Where has this “new” interface come from? What are the advantages? What kind of applications is it good for? Who should use it? Who shouldn’t?

This presentation will provide an introduction to MIPI CSI-2 and will show a comparison to other commonly used machine vision interfaces. It will show the prevalence of CSI-2 on embedded processing boards and will explain the engineering design considerations that will help users decide if they should develop their next system with this interface technology.

Fast and flexible image sensors open up new applications in inline inspection

AIT Austrian Institute of Technology GmbH

17.10. 11:00-11:25 | 18.10. 12:15-12:40 | German sessions

Speaker: Ernst Bodenstorfer

Today, optical surface inspection systems are expected to capture, evaluate, measure and classify a growing variety of physical properties. Examples include capturing 3D surface features, hyperspectral imaging for more robust material classification, and scanning surfaces made of materials with special bidirectional reflection properties, such as glossy or semi-transparent surfaces, surfaces with holographic effects, or surfaces with direction-dependent colour effects.

To capture ever more surface properties under growing speed and real-time requirements, image sensors must become faster and more flexible. The talk focuses on the novel applications enabled by fast and flexible multi-line-scan technology.

Industry 4.0 - Communication via OPC UA (standardised mechanisms for configuration, diagnostics and data exchange)

ascolab GmbH

17.10. 14:00-14:25 | 18.10. 9:00-9:25 | German sessions

Speaker: Uwe Steinkrauss

Open Platform Communications Unified Architecture (OPC UA) is a new interface standard for the Industry 4.0 initiative. In contrast to other standards, OPC UA focuses on creating new connections between previously separate levels, extending communication vertically. This means that units from higher layers of the automation pyramid can now communicate with machine vision systems.

Abstract results as well as raw data can thus be transferred to PLC, MES or ERP systems, and in return machine vision systems can be configured and controlled the same way. Today, machine vision systems use many different proprietary interfaces, which increases complexity and development costs. A common cross-system interface would ease the fusion of machine vision systems with additional sensors.

Optimised design of 3D laser triangulation systems

Automation Technology

17.10. 14:00-14:25 | 18.10. 9:30-9:55 | German sessions

Speaker: Dr.-Ing. Athinodoros Klipfel

As 3D laser triangulation is used more and more by system integrators and OEMs in the development of industrial inspection systems, questions arise on how to optimally design the scanning setup to fulfill the application requirements.

The presentation provides a brief guideline for the selection of components (camera, lens, laser), setting up the right geometry (triangulation angle, working distance), the application of suitable algorithms for 3D line detection in the camera image and the estimation of 3D scan properties such as precision and profile speed. Issues like the use of camera Scheimpflug adapters, the choice of laser wavelength and the calibration of the 3D setup will be part of the discussion. Furthermore, all-in-one 3D compact sensors will be presented as an alternative to the discrete camera-laser triangulation setup, and their specific characteristics will be compared.
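As a back-of-envelope illustration of the geometry trade-offs discussed above, the height resolution of one common triangulation configuration (camera viewing perpendicular to the surface, laser projected at an angle) follows from pixel size, magnification and triangulation angle. The function and the chosen configuration are illustrative assumptions, not taken from the presentation:

```python
import math

def height_per_pixel_um(pixel_size_um, magnification, tri_angle_deg):
    """Height change (um) that shifts the laser line by one sensor pixel,
    for a camera perpendicular to the surface and a laser projected at
    tri_angle_deg from the surface normal."""
    # One sensor pixel maps to this lateral distance on the object:
    lateral_um = pixel_size_um / magnification
    # A height change dz shifts the line laterally by dz * tan(angle):
    return lateral_um / math.tan(math.radians(tri_angle_deg))

# A larger triangulation angle improves height resolution but worsens
# occlusion; around 45 degrees the two are often balanced.
```

With a 5 µm pixel, 0.5x magnification and a 45° angle this gives 10 µm of height per pixel; sub-pixel line detection algorithms improve on this further.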

Exemplary acceptance testing of machine vision systems – validation as a new route to success

attentra GmbH

17.10. 14:00-14:25 | 18.10. 11:45-12:10 | German sessions

Speaker: Christian Vollrath

Machine vision systems are used ever more frequently in modern production. But how can a machine vision system be validated correctly? How do you determine process capability for a non-measuring optical system, for example when checking that the correct variant of a component has been installed?

The simple misuse test (cross test), as it is often required, is a conceivable starting point, but it ignores statistical factors such as varying material properties. This talk is intended as initial food for thought on the topic, without diving too deep into statistical mathematics.

Challenges of installing and operating machine vision components in diverse industries

autoVimation

17.10. 16:45-17:10 | 18.10. 14:15-14:40 | German sessions

Speaker: Peter Neuhaus

Machine vision applications are advancing into markets with increased requirements for the protection of cameras, lights, lasers and other machine vision components. Application areas such as the automotive, metal, glass and paper industries, the food, medical and pharmaceutical industries, the sport, entertainment and traffic sectors and the solar industry all have different requirements.

This presentation discusses the technical and legal requirements for machine vision applications in diverse industries. It also covers technical solutions that enable safe and compliant installation and operation of cameras and other components in challenging environments.

Digitalisation in practice

CANCOM

Part 1 | 17.10. 16:15-16:40 | 18.10. 10:00-10:25 | German sessions
Part 2 | 17.10. 16:45-17:10 | 18.10. 11:15-11:40 | German sessions

Speaker: Werner Schwarz

Modular IoT building blocks - from the intelligent edge to IoT platforms

This talk describes which IoT building blocks are needed to digitalise different areas of a company and how they can be introduced step by step along an individual digitalisation roadmap.

The following topics are covered:

  • IoT building blocks – step by step to your own digitalisation roadmap
  • IoT edge computing – collecting & consolidating IoT, image and video data
  • IoT network & security – connectivity & security as the basis of digitalisation
  • IoT platform – agility, innovation & new business models
  • Practical IoT examples

Getting the best image for your vision application with computational imaging

CCS

17.10. 10:30-10:55 | 18.10. 9:00-9:25 | English sessions

Speaker: Steve Kinney

By creating an output image focused on the image properties that are most important to a particular machine vision task, computational imaging (CI) offers powerful advantages over traditional one-shot imaging. Relying on data extracted and computed from a series of input images captured under different lighting or optical conditions, computational imaging techniques outperform traditional image acquisition.

This presentation covers six practical examples of CI solutions for machine vision applications, including photometric stereo (also known as shape from shading), super-resolution colour, high dynamic range (HDR), extended depth of field (EDOF), bright field/dark field, and multi-spectral imaging. CCS computational imaging solutions simplify the hardware, timing and acquisition to easily bring the benefits of CI to any application.
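Photometric stereo, the first of the six techniques listed, can be sketched in a few lines: with k images taken under known light directions, per-pixel albedo and surface normals follow from a least-squares solve. This is a minimal numpy sketch under the Lambertian assumption, not CCS's implementation; all names are illustrative:

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Lambertian photometric stereo ("shape from shading" family).
    images: (k, h, w) intensity stack, one image per light.
    light_dirs: (k, 3) unit light direction for each image.
    Returns (h, w, 3) unit normals and an (h, w) albedo map."""
    k, h, w = images.shape
    I = images.reshape(k, -1)                       # (k, h*w)
    # Solve light_dirs @ G = I in the least-squares sense;
    # each column of G is albedo * normal for one pixel.
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.clip(albedo, 1e-12, None)
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)
```

Three or more non-coplanar light directions are needed; in hardware this is exactly the "several images under different lighting" acquisition that the lighting/timing solutions above coordinate.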

The most underrated component in selecting an interface… the cable!

CEI

17.10. 15:45-16:10 | 18.10. 14:45-15:10 | English sessions

Speaker: Steve Mott

Learn when and why you should choose a particular interface standard for your application, and understand the important limiting factors for your proposed vision system.

The new, unified EMVA 1288 Release 3.1 camera datasheet: comparing cameras reliably, quickly and flexibly

EMVA 1288

17.10. 16:45-17:10 | 18.10. 14:15-14:40 | German sessions

Speaker: Prof. Dr. Bernd Jähne

The most important innovation in Release 3.1 of the European Machine Vision Association's (EMVA) globally proven and widely used standard 1288 is a unified datasheet that summarises all essential information about a camera on a single page.

This talk explains the new datasheet and uses a series of practical examples to show how it can be used to flexibly find the best camera for different application scenarios. An outlook covers the extensions of the standard currently in progress: "shutter efficiency" and the coupling of the camera to the optics.
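One practical example of such a datasheet comparison: the EMVA 1288 linear camera model lets you predict SNR at a given light level from just the quantum efficiency and temporal dark noise on the sheet. This is a simplified sketch that neglects quantisation noise; the function is illustrative and not part of the standard's tooling:

```python
import math

def emva1288_snr(photons, qe, dark_noise_e):
    """SNR under the EMVA 1288 linear model: signal = qe * photons
    electrons; noise = read (dark) noise plus photon shot noise,
    added in quadrature. Quantisation noise is neglected here."""
    signal_e = qe * photons
    return signal_e / math.sqrt(dark_noise_e ** 2 + signal_e)

# Comparing two hypothetical datasheets at a low light level:
# camera A: QE 0.65, 2.5 e- dark noise; camera B: QE 0.80, 8 e-.
low_light = [emva1288_snr(100, 0.65, 2.5), emva1288_snr(100, 0.80, 8.0)]
# At 100 photons/pixel the low-noise camera wins despite its lower QE;
# at high light levels both approach the shot-noise limit sqrt(signal).
```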

Colour identification, colour quantification and hyperspectral imaging – VIS and NIR inspection

European Imaging Academy

17.10. 11:30-11:55 | 18.10. 11:15-11:40 | German sessions

Speaker: Lars Fermum

The lecture deals with the basics and technologies for colour differentiation of objects. Starting from simple colour recognition with conventional colour cameras and evaluation in the RGB colour space, we will discuss the advantages of other methods such as the HSV colour space or the spectral quantification in the CIE XYZ colour space.

How does the human eye perceive colours and levels of brightness, and what is colour metamerism? What are the identification capabilities of RGB colour cameras, multi-camera systems and hyperspectral imaging systems? What can be inspected and detected in the visible and in the IR range? The lecture also addresses the chemical-physical effects that are used for evaluation purposes.
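The advantage of moving from RGB to HSV mentioned above can be seen with the standard library alone: HSV separates hue from brightness, so the "same" colour at different illumination levels keeps its hue and saturation. A minimal stdlib sketch:

```python
import colorsys

# A red object seen brightly lit and in shadow: very different RGB triples...
bright = (0.9, 0.1, 0.1)
shadow = (0.45, 0.05, 0.05)

h1, s1, v1 = colorsys.rgb_to_hsv(*bright)
h2, s2, v2 = colorsys.rgb_to_hsv(*shadow)

# ...but identical hue and saturation; only V (brightness) differs.
# Thresholding on hue is therefore far more robust to illumination
# swings than thresholding on raw R, G, B channels.
```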

Colour inspection, Infrared and UV – tips, special features, requirements

European Imaging Academy

17.10. 11:00-11:25 | 18.10. 14:45-15:10 | German sessions

Speaker: Lars Fermum

Colour, IR- and UV illumination can be used in combination with monochrome or colour cameras in order to visualise the diverse inspection features. In addition to the illumination technology basics we address topics such as the spectral sensitivity of camera sensors, suitable lenses, optical filters, and other subjects such as the screening of extraneous light.

Inspecting transparent objects

European Imaging Academy

17.10. 14:30-14:55 | 18.10. 14:15-14:40 | German sessions

Speaker: Lars Fermum

Transparent materials such as glass, film, plastics, adhesive film or liquids have proven again and again to be difficult test objects.

However, the right optical set-up, an appropriate lighting method or inspection technology can solve these problems and make it possible to locate objects or detect surface defects, cracks, edge chipping and impurities.

What makes a lens a “good lens” for machine vision?

Fujifilm

17.10. 10:30-10:55 | 18.10. 11:45-12:10 | English sessions

Speaker: Naoki Nishimoto

A lens is the first part of an imaging system and therefore needs to deliver decent image quality. But what makes a lens a “good lens”? What makes the difference in lens design and lens production?

The presentation explains the challenges in optical design and how Fujinon lenses are optimized to deliver best image quality.

Five reasons to use a lighting controller

Gardasoft

17.10. 13:30-13:55 | 18.10. 11:45-12:10 | English sessions

Speaker: Martin House

Do you want to improve the reliability and repeatability of your machine vision applications? Do you need to increase the speed? Will your system need maintenance when the lighting becomes less bright?

Lighting controllers give more stable light output and can give remote control of the lighting so that the brightness can be maintained as the light gets older and less bright. Overdriving is a powerful technique to get more brightness from LED lighting. High speed synchronisation of lighting enables multi-light applications whilst maintaining high line speeds.

This presentation explains the features available in lighting controllers and how they can be used to improve the capability, speed and reliability of your machine vision system. It then describes the latest techniques and features available, including how you can save cost and complexity by reducing the number of inspection positions in a machine.
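As a rough illustration of the overdriving idea mentioned above: strobing at a low duty cycle leaves headroom above the LED's continuous current rating. The average-current bound below is an assumption for the sketch only; real per-pulse limits from LED and controller datasheets are much stricter:

```python
def overdrive_headroom(pulse_us, period_us, safety=0.8):
    """Upper bound on the pulse-current multiple of the continuous
    rating such that the *average* current stays at `safety` times
    that rating. Real controllers also cap the absolute pulse
    current and pulse width; always defer to the datasheet."""
    duty = pulse_us / period_us
    return safety / duty

# Strobing 50 us out of every 5 ms is a 1% duty cycle, so the
# average-current bound alone would allow a large multiple; vendors
# typically cap usable overdrive far lower (on the order of 2-10x).
```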

Robust and flexible – novel light-field methods for robotics and industrial inspection

HD Vision Systems GmbH

17.10. 15:45-16:10 | 18.10. 14:45-15:10 | German sessions

Speaker: Dr. Christoph Garbe

The 3D measurement of complex objects with glossy surfaces is a challenge for optical metrology. HD Vision Systems GmbH, a spin-off of Universität Heidelberg, has developed an innovative light-field-based approach that is very robust and flexible to deploy.

The hardware/software solutions meet high demands on measurement accuracy, measurement speed and robustness. This contribution presents the technological foundations and illustrates applications from Industry 4.0 and robotics.

Intel RealSense™ depth camera technology review for acquisition of 3D data

intel

17.10. 14:30-14:55 | 18.10. 9:30-9:55 | English sessions

Speaker: Miroslav Mlejnek

This presentation outlines the latest technical innovations in modular and robust Intel RealSense™ technology, which combines passive/active stereo camera technology and SLAM tracking modules to realise advanced solutions in areas such as robotics, drones, consumer electronics and video analytics.

Top 10 machine vision trends

InVision

17.10. 10:30-10:55 | 18.10. 11:45-12:10 | German sessions

Speaker: Dr. Peter Ebert

The possibilities for using machine vision are becoming ever broader, so it is not always easy to keep track of current developments. From the perspective of an editor of the machine vision magazine inVision, this talk gives a short, independent overview of which trends are worth a closer look. Take away suggestions and ideas for your next applications.

Colour imaging: Getting the best out of multi-sensor prism cameras

JAI

17.10. 14:30-14:55 | 18.10. 11:45-12:10 | German sessions

Speaker: Christian Felsheim

Multi-sensor prism cameras offer significant advantages for colour imaging. Bayer, multi-linear colour line scan and hyperspectral sensors with colour filters on top of each pixel block out most of the light falling onto the sensor. In contrast, multi-sensor cameras do not block but separate the light using dichroic prisms, so (almost) no light is lost. Advantages of this design include a better signal-to-noise ratio, higher colour contrast, much lower crosstalk between colour channels, fewer colour interference effects as often seen in Bayer images, and a lack of the halo effects often seen in images taken with multi-linear line scan cameras.

Furthermore, due to the nature of the multi-sensor camera design, every colour channel can be adjusted in gain and exposure time separately. As a result, images show much higher dynamic range, contrast and signal-to-noise ratios across the whole colour bandwidth. For multi-spectral applications, channel separation can be customised by adjusting the dichroic coating of the prisms.

Robotic as a Service

KUKA

17.10. 13:30-13:55 | German session

Speaker: Heinrich Munz

With the Internet of Things (IoT), and its production-focused offshoot known as "Industrie 4.0", the focus today is mostly on pure data collection: the "things" (sensors, cameras, actuators, whole machines, etc.) produce data during their actual task, which must first be compressed into information packets and fed to the cloud. There, technologies such as machine learning collect and evaluate them.

This enables process optimisation, predictive maintenance and more. The data and information are transferred to the cloud either directly or via so-called "edge gateways". This purely data-centric approach falls short, however, and does not do justice to the potential of the new architecture levels of HMI apps/cloud/edge/thing.

Turning away from the previous data orientation and towards service orientation in the sense of SOA (Service Oriented Architecture) leads inevitably to "Robotic as a Service" (RaaS), in a double sense. On the one hand, new business models emerge: robots, for example, are no longer sold but provided to customers and billed per movement ("pay per move"). On the other hand, every "thing", including cameras, can be understood as a function server that offers its services to the controlling instance "above" it, i.e. the cloud or the edge. The robot thus becomes a "motion server" that is no longer programmed on its own, as today, but receives its motion commands centrally from the edge as service calls, just like all other automation devices involved in the process, such as cameras.

Smart customization of 3D sensors with application specific algorithms

LMI Technologies

17.10. 10:30-10:55 | 18.10. 10:00-10:25 | German sessions

Speaker: Christian Benderoth

Meeting special application requirements is one of the challenges in today's 3D sensor market. Successful manufacturing now depends not only on speed but also on high precision in quality control through accurate and reliable measurement data. To achieve this, LMI shows how 3D smart sensors can be tailored to individual needs in various ways in order to master the most diverse situations.

Using examples, LMI explains how to gain more control over the inspection workflow, for instance by letting software developers test their own applications in a safe offline environment without needing a physical sensor. Working with large 3D point clouds can also be simplified by using the computing power of one or more PCs for data processing.

The talk also explains how users can develop their own custom measurement algorithms with the help of cross-compiler tools and programming interfaces, and deploy them directly on 3D smart sensors. This extends the sensor's functionality and offers the flexibility needed in a rapidly changing environment.

Multi-spectral, SWIR and hyperspectral, next generation of LED illumination

Metaphase

17.10. 11:00-11:25 | 18.10. 10:00-10:25 | English sessions

Speaker: James Gardiner

The next generation of LED illumination is upon us! Metaphase Lighting is bringing the next generation of LEDs, optics and driver-control technology to machine vision today. This presentation will cover the technology behind the latest LED illumination that allows vision systems to extract more information than ever before. New driver and optic-blending technologies allow multispectral LED solutions to incorporate more wavelengths in a single light source. High-powered SWIR (short-wave infrared) LEDs and optics expand the inspection capabilities for line scan and area scan applications. We also look at how LED technology is playing a role in hyperspectral imaging.

Filters for machine vision by machine vision

Midopt

17.10. 11:00-11:25 | 18.10. 9:00-9:25 | English sessions

Speaker: Georgy Das

Optical filters are critical components of machine vision systems. They’re used to maximise contrast, improve colour, enhance subject recognition and control the light that’s reflected from the object being inspected. Learn more about the different filter types, what applications they’re best used for and the most important design features to look for in each. Not all machine vision filters are the same. Learn how to reduce the effects of angular short-shifting. Discover the benefits of filters that emulate the bell-shaped spectral output curve of the LED illumination being used. And find out more about the importance of a high-quality inspection process that limits the possibility for imperfections and enhances system performance.
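The point about matching a filter to the LED's bell-shaped spectral output can be made quantitative: model the LED emission as a Gaussian and compute what fraction of it an ideal bandpass actually passes. A numerical sketch under that assumption; real LED spectra and filter edges are not ideal:

```python
import numpy as np

def passed_fraction(led_center_nm, led_fwhm_nm, pass_lo_nm, pass_hi_nm):
    """Fraction of a Gaussian-shaped LED emission passed by an ideal
    bandpass filter with hard edges at pass_lo_nm..pass_hi_nm."""
    sigma = led_fwhm_nm / 2.3548                     # FWHM -> sigma
    wl = np.linspace(led_center_nm - 6 * sigma,
                     led_center_nm + 6 * sigma, 4001)
    emission = np.exp(-0.5 * ((wl - led_center_nm) / sigma) ** 2)
    in_band = (wl >= pass_lo_nm) & (wl <= pass_hi_nm)
    return emission[in_band].sum() / emission.sum()

# A passband matching the LED's FWHM (e.g. 610-640 nm for a 625 nm,
# 30 nm-FWHM red LED) passes roughly three quarters of the emission
# while rejecting broadband ambient light outside the band.
```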

Industrial applications of the line scan bar

Mitsubishi Electric

17.10. 15:45-16:10 | 18.10. 11:45-12:10 | German sessions

Speaker: Jan Friedrich

Besides its main use for inspection in the printing industry, line scan bar technology is an ideal solution for surface inspection of web material in various industries such as wood, glass, solar or electronics manufacturing.

Correct combination of liquid lenses with endocentric and telecentric optics

Optotune

17.10. 11:30-11:55 | 18.10. 9:30-9:55 | German sessions

Speaker: Mark Ventura

Liquid lenses are a great technology for fast and reliable focusing. As the vast diversity of options for sensors and optics can make the selection of components challenging, the goal of this talk is to provide simple design guidelines and examples so that you can make liquid lenses a practical part of your imaging toolbox.

Big image data - Smart image data

Optronis

17.10. 11:30-11:55 | 18.10. 14:15-14:40 | German sessions

Speaker: Dr. Bernd Reinke

High-speed cameras generate extremely high data volumes per unit of time. When recording single events, as is typical for slow motion, the data is predominantly unstructured (big image data); on the other hand, there are machine vision tasks that work mainly with structured image data (smart image data), where defined image processing leads to an image result. Due to the high data stream, a new approach is required when selecting machine vision components during the design-in phase, so that classical image processing tools can still be used during deployment.

Chemical Colour Imaging … makes hyperspectral cameras ready for the factory floor

Perception Park

17.10. 11:00-11:25 | 18.10. 15:15-15:40 | German sessions

Speaker: Lukas Daum

Vibrational spectroscopy is based on the fact that molecules reflect, absorb or ignore electromagnetic waves of certain wavelengths. Hyperspectral sensors measure those responses and return a spectrum per spatial point as the chemical fingerprint of a material. This data requires extensive processing to be usable for vision systems.

In this presentation, Perception Park explains how hyperspectral camera technology and image processing can be combined in an HSI solution. Chemical colour imaging methods transform hyperspectral data into image streams. These streams can be configured to highlight chemical properties of interest and are sent to image processing systems via protocols like GigE Vision. Vision systems are thus extended to see additional chemical properties without the need for further development, so objects that differ only in their chemical properties can now be separated from each other. Applications: recycling, food safety, quality assurance (e.g. pharma, food and packaging), colour measurement, etc.

The industrial application of hyperspectral imaging is especially challenging in terms of sensor correction, repeatability and processing speed. This calls for highly configurable tools that can fully utilise the sensor's capacity through GPU-accelerated processing.
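In its simplest form, the "image stream highlighting chemical properties" described above can be approximated by a band-ratio projection of the hyperspectral cube. This is a generic sketch, not Perception Park's actual method; all names are illustrative:

```python
import numpy as np

def band_ratio_stream(cube, wavelengths_nm, num_nm, den_nm):
    """Collapse a hyperspectral cube (h, w, bands) into a single
    'chemical' channel by ratioing the two bands nearest the
    wavelengths of interest, e.g. an absorption band of the target
    material against a reference band. High values flag the target."""
    wl = np.asarray(wavelengths_nm)
    i = int(np.abs(wl - num_nm).argmin())   # band nearest the numerator
    j = int(np.abs(wl - den_nm).argmin())   # band nearest the denominator
    return cube[..., i] / np.clip(cube[..., j], 1e-12, None)
```

Such a per-pixel scalar can then be streamed to a standard vision system as a grayscale image, which matches the GigE Vision integration path the abstract describes.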

Microscope systems and their application possibilities in image processing

Qioptiq

17.10. 13:30-13:55 | 18.10. 10:00-10:25 | German sessions

Speaker: Thomas Schäffler

"Microscopes are designed for use in the laboratory to look at small structures and objects. But for machine vision such systems are inefficient." Once upon a time this was the case, but this thought still haunts people today. In fact, there are now microscope systems that were specially developed for machine vision and, compared to lenses with finite imaging, not only offer higher resolutions but also open up new approaches to inspection procedures.

Optical coherence tomography (OCT) as a new imaging technique for monitoring pharmaceutical coating processes

Research Center Pharmaceutical Engineering GmbH

17.10. 16:15-16:40 | 18.10. 15:15-15:40 | German sessions

Speaker: Matthias Wolfgang

Optical coherence tomography (OCT) is a contactless, non-destructive and high-resolution visualisation technique based on low-coherence interferometry. So far, this technology has mainly been used in medicine, especially in ophthalmology.

At the RCPE, in cooperation with RECENDT, a measuring device based on OCT technology was developed for in-line monitoring of coating processes for pharmaceutical film-coated tablets and pellets. Tomographic cross-sectional images of the coatings are generated in real time directly from the interferograms of a spectral-domain OCT (SD-OCT) device and evaluated with special algorithms. Thanks to the high sampling rate, this technique allows a direct measurement of the coating thickness on individual pellets/tablets during the coating process, rather than providing only average values. In a very short time, the user thus obtains far more information about coating thickness, quality and variability than with standard quality-control methods.

Compared to other optical methods such as Raman spectroscopy or terahertz imaging, OCT also demonstrates advantages in speed and spatial resolution. The high information content, together with the real-time availability of measurement data, underlines the capability of OCT technology and adds direct value to process understanding and to the control of manufacturing processes in pharmaceutical tablet coating.
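The real-time evaluation "directly from the interferograms" rests on a simple relationship: in SD-OCT, a reflector at a given depth produces a fringe oscillation of a proportional frequency in the spectral interferogram, so an FFT of the k-linearised spectrum yields the depth profile. A minimal sketch of that single step, with resampling, dispersion compensation and calibration omitted:

```python
import numpy as np

def a_scan(spectrum_k):
    """Depth profile (A-scan) from a spectral interferogram that is
    already sampled uniformly in wavenumber k: remove the DC term,
    then take the FFT magnitude. A reflector at depth z appears as a
    peak whose bin index is proportional to z."""
    s = spectrum_k - spectrum_k.mean()
    return np.abs(np.fft.rfft(s))

# Synthetic example: one coating interface produces one spectral
# fringe frequency, hence a single peak in the depth profile.
k = np.arange(512)
spectrum = 1.0 + 0.3 * np.cos(2 * np.pi * 25 * k / 512)
depth_profile = a_scan(spectrum)
```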

Shape from shading - automated 100% inspection of surfaces compared with other 3D methods

SAC Sirius Advanced Cybernetics GmbH

18.10. 15:15-15:40 | German session

Speaker: Johannes Zahn, SAC Sirius Advanced Cybernetics GmbH

In many areas of automated surface inspection, 3D methods enable reliable detection of function-critical defects. This talk presents various optical 3D methods and compares them with the properties of shape from shading technology. It also examines the illumination approaches established in the shape from shading method. The focus is on the fast and reliable inspection of the finest topographic defects in the µm range, especially on demanding surfaces, illustrated with practical tasks.

The importance of wavelengths on optical designs

Schneider Kreuznach

17.10. 14:00-14:25 | 18.10. 11:15-11:40 | German sessions

Speaker: Steffen Mahler

Today's machine vision applications call for much greater specialisation of lenses for different wavelength ranges. Requirements range from monochrome illumination for metrology applications and daylight or NIR illumination for outdoor applications up to hyperspectral technologies in the SWIR range. Lenses used in such applications therefore require different kinds of colour correction, which must be considered during the design phase.

Illumination sequences for surface inspection: High-speed mechanisms for image acquisition and pre-processing

Silicon Software

17.10. 11:30-11:55 | 18.10. 12:15-12:40 | German sessions

Speaker: Björn Rudde

In several industry sectors surface inspection is utilizing industrial machine vision. A variety of algorithms is applied to the image data in order to solve this inspection task.

The detection of tiny surface errors is enabled by using high resolution sensors and leads to high bandwidth demands. Especially the control and acquisition of illumination sequences is a central element in this field of machine vision. Additionally several pre-processing steps need to be performed in an efficient way.

Different mechanisms for image creation are implemented on the basis of robust image acquisition and processing. How this is realised in practice – especially for high-speed processes – will be covered in this talk.
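The core mechanism, triggering a different light for each exposure and reducing the resulting image stack on the fly, can be sketched as follows (hypothetical camera and lighting-controller interface, not Silicon Software's API):

```python
import numpy as np

def acquire_sequence(grab_frame, set_light, n_lights):
    """Cycle through illumination settings and grab one frame per light.

    grab_frame / set_light are stand-ins for real camera and
    lighting-controller calls (hypothetical interface).
    """
    frames = []
    for light in range(n_lights):
        set_light(light)          # switch illumination before exposure
        frames.append(grab_frame())
    return np.stack(frames)

def preprocess(frames):
    # A common on-the-fly reduction: the per-pixel range across the
    # sequence highlights direction-dependent (topographic) defects.
    return frames.max(axis=0) - frames.min(axis=0)

# Simulated run with 4 lights and 8x8 frames.
rng = np.random.default_rng(0)
simulate_grab = lambda: rng.integers(0, 256, size=(8, 8)).astype(np.uint8)
state = {"light": None}
seq = acquire_sequence(simulate_grab, lambda i: state.update(light=i), 4)
contrast = preprocess(seq)
```

In a frame-grabber implementation this reduction runs in the FPGA during acquisition, which is what keeps the host bandwidth requirement manageable.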

Use of telecentric lenses for applications beyond standard metrology setups

Sill Optics

17.10. 14:30-14:55 | 18.10. 14:45-15:10 | German sessions

Speaker: Andreas Platz

Why choose a telecentric lens? The advantages of telecentric measurements are obvious: High precision, constant magnification and low distortion, even for objects with a certain depth.

But the range of applications goes beyond that. Telecentric coaxial illumination through the lens improves surface texture evaluation. A telecentric lens combined with a focus-tunable lens enables a variable working distance and z-scanning. Moreover, telecentric imaging of a tilted object plane results in considerably lower distortion. Furthermore, special sensors and projection tasks require a telecentric beam path. All cases require verification of different factors before system development.

The presentation gives an overview of special optical setups as well as the possibilities and limitations of certain solutions.
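One practical consequence of constant magnification is that the pixel-to-millimetre scale factor does not drift with object height within the telecentric depth range. A minimal sketch with illustrative numbers:

```python
def object_size_mm(pixels, sensor_pixel_size_um, magnification):
    """Convert a measured length in pixels to object-space millimetres.

    With a telecentric lens the magnification is constant over the
    telecentric depth range, so this scale factor is depth-independent.
    """
    return pixels * (sensor_pixel_size_um / 1000.0) / magnification

# Example: a 500 px feature, 3.45 um pixels, 0.5x telecentric lens.
size = object_size_mm(500, 3.45, 0.5)   # 3.45 mm, regardless of object height
```

With an entocentric (standard) lens the same calculation would need the object distance as an extra input, which is exactly the error source telecentric metrology avoids.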

IEC 62471 photobiological safety standards for LED lighting products

Smart Vision Lights

17.10. 14:00-14:25 | 18.10. 9:30-9:55 | English sessions

Speaker: Matt Pinter

This session will provide information to help users understand how to test LED lighting products in accordance with IEC 62471, the photobiological safety standard for lamps and lamp systems, as applied to LED lighting. A practical approach to testing an LED light and classifying it into the proper risk group will be covered. An overview of the testing procedures, the equipment and the IEC 62471 report will also be given.
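As a rough illustration of the risk-group idea, the commonly cited blue-light-hazard exposure-time limits can be mapped to risk groups as follows (a sketch only; the standard defines several distinct hazards, measurement distances and conditions, so consult IEC 62471 itself for classification):

```python
def blue_light_risk_group(t_max_seconds):
    """Map the time within which the blue-light hazard limit would be
    exceeded to an IEC 62471 risk group.

    Thresholds follow the commonly cited blue-light-hazard exposure
    times (10000 s / 100 s / 0.25 s); illustrative, not a substitute
    for the standard's full per-hazard limits.
    """
    if t_max_seconds >= 10000:
        return "Exempt"
    if t_max_seconds >= 100:
        return "Risk Group 1"
    if t_max_seconds >= 0.25:
        return "Risk Group 2"
    return "Risk Group 3"
```

The measurement side of the talk covers how that permissible exposure time is actually determined from spectroradiometric data.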

Advantages of push-broom technology for hyperspectral imaging

Specim Spectral Imaging

17.10. 14:00-14:25 | 18.10. 9:00-9:25 | German sessions

Speaker: Dr. Georg Meissner

Hyperspectral imaging is a new imaging technology for industrial applications such as quality control and process control. Several hyperspectral imaging technologies are available as alternatives. When selecting the imaging camera and equipment, both functional and commercial characteristics should be considered. Typical application criteria are wavelength range, resolution, acquisition speed and return on investment. However, the application outcome is also decisively influenced by the selected hyperspectral technology's impact on illumination, the camera's light requirements for signal efficiency, the accuracy and quality of the spectral data, and the ease of data exploitation.

The presentation explains the decisive differences between push-broom technology and competing technologies, as well as their impact on practical industrial applications.
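The defining characteristic of a push-broom camera, one (spatial x spectral) line per exposure swept into a data cube by relative motion, can be sketched as:

```python
import numpy as np

def assemble_cube(line_frames):
    """Stack push-broom line scans into a hyperspectral cube.

    Each frame from a push-broom camera is a (spatial, spectral) slice of
    the scene; moving the object (or the camera) sweeps out the second
    spatial axis. Result shape: (along_track, across_track, bands).
    """
    return np.stack(line_frames, axis=0)

# Simulated scan: 100 line acquisitions, 640 spatial pixels, 224 bands
# (illustrative figures, not a specific Specim model).
lines = [np.zeros((640, 224), dtype=np.uint16) for _ in range(100)]
cube = assemble_cube(lines)
spectrum = cube[50, 320, :]   # full spectrum of one scene point
```

Because every acquired line already carries the full spectrum, a push-broom system needs no spectral scanning, which is what makes it attractive for continuous web and conveyor applications.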

Multi- and hyperspectral imaging for applications in industry, biomedicine and everyday life

Spectronet

17.10. 16:45-17:10 | 18.10. 9:00-9:25 | German sessions

Speaker: Paul-Gerald Dittrich

Photonic micro-sensors and digital image processing are key components for measuring, controlling and regulating quality. To meet growing quality expectations, miniaturised photonic micro-sensors for geometric, colorimetric and spectrometric measurements are now available. The latest developments enable the simultaneous acquisition of geometric, colorimetric and spectrometric information with specialised micro-cameras. These cameras are known as multi-/hyperspectral cameras.

The talk covers the components involved (illumination, optics, sensors, interfaces, computers and software) and their applications in industry, biomedicine and everyday life, and gives an outlook on future developments in mobile photonic measurement technology and modularised, networked quality assurance.

3D image processing – From challenge to achievement

STEMMER IMAGING

17.10. 14:30-14:55 | 18.10. 15:15-15:40 | German sessions

Speaker: Maurice Lingenfelder

3D image processing is now a widely accepted part of automated quality inspection in industrial applications and is still experiencing strong growth. It is used whenever the limits of classical 2D image processing have been reached, or when highly complex camera systems can simply be replaced by a single 3D sensor.

This presentation gives an overview of the current state of 3D image processing. Furthermore, it describes the workflow from hardware selection to the evaluation of the acquired 3D data. Particular focus is placed on the calibration of raw sensor data and the processing of 3D point clouds using variance analysis against a golden sample.
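The golden-sample comparison boils down to per-point deviations between a measured point cloud and a reference cloud. A brute-force sketch (assuming both clouds are already registered; production systems use spatial indexing instead of the O(n*m) search shown here):

```python
import numpy as np

def deviation_from_golden(points, golden):
    """Per-point distance from a measured cloud to a golden-sample cloud.

    points, golden: (n, 3) and (m, 3) arrays of XYZ coordinates in the
    same frame. Returns the nearest-neighbour distance for each measured
    point; large values flag local deviations from the golden part.
    """
    d = np.linalg.norm(points[:, None, :] - golden[None, :, :], axis=2)
    return d.min(axis=1)

# Simulated check: the measured part is the golden part shifted by 1 um
# per axis (illustrative synthetic data).
rng = np.random.default_rng(1)
golden = rng.random((200, 3))
measured = golden + 0.001
dev = deviation_from_golden(measured, golden)
```

Thresholding `dev` then yields the defect map; the statistics of `dev` over many parts are what the variance analysis operates on.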

Image processing through the years

STEMMER IMAGING

17.10. 16:45-17:10 | 18.10. 14:45-15:10 | English sessions

Speaker: Dr. Jonathan Vickers

This year Common Vision Blox turns 20 years old. What changes have come to the machine vision market over those years? How has the hardware developed, and what effect did it have on the software and the systems?

This talk tracks the changes as they affected Common Vision Blox as an independent programming library, from Intel Pentium 2 CPUs running at 233MHz to embedded ARM boards running at 2.4GHz. From TV standard interlaced analogue cameras to 260 megapixels. Learning tools, GPUs and distributed processing. But the biggest change? Standards.

CVB++, CVB.Net and pyCVB – new approaches to state of the art application development with Common Vision Blox

STEMMER IMAGING

17.10. 16:15-16:40 | 18.10. 15:15-15:40 | German sessions

Speaker: Volker Gimple

Over the past 20 years, programming languages, runtime environments and programming techniques have continued to evolve. Today, software developers have a much broader range of tools, platforms and methods than when Common Vision Blox was first launched. Thanks to its C-based procedural API, Common Vision Blox can still be used with virtually every common programming language and on any standard platform, 20 years after its design.

On the other hand, many modern programming patterns rely on higher-level language features, and the C-style Common Vision Blox API currently has to be wrapped by almost every customer. For this reason, three new APIs are being introduced so that Common Vision Blox is easier to use from the C++, C# and Python languages, in particular for the creation and debugging of complex applications.

The design of the three language-specific APIs will be presented and compared at a fundamental level; knowledge of at least one of the three languages is an advantage. The improved possibilities for troubleshooting and the connection to common runtime libraries (Qt, WPF, Windows Forms, NumPy) are also discussed, along with an outline of the current state of the work and an outlook on further development steps.

Hardware and software for embedded machine vision

STEMMER IMAGING

17.10. 15:45-16:10 | 18.10. 10:00-10:25 | German sessions

Speaker: Martin Kersting

The generic term “Embedded Vision“ tells us very little about the underlying hardware and software. But there are huge differences in the hardware and operating systems involved. Whether it’s Windows IoT on Intel platforms, Linux on the TX1 graphics processor or Android on simple ARM architectures, everything is lumped together and labelled “Embedded Vision“.

This presentation covers the diversity in hardware platforms, each with their pros and cons, and provides information on the possible tools for cross-platform application development. In addition, it presents the image acquisition possibilities for such platforms.

Machine vision classifiers – Advantages and challenges of selected methods

STEMMER IMAGING

17.10. 13:30-13:55 | 18.10. 12:15-12:40 | German sessions

Speaker: Frank Orben

Machine vision classifiers can be used for judging objects’ condition or appearance and assigning them class labels. In this presentation, we will discuss the use of ridge regression and convolutional neural networks (deep learning) for classification tasks, as well as their advantages and challenges. We will look at the basic theory necessary to get an insight into how these approaches work, why they perform well for certain use cases and where potential impasses lie. Different data sets have been trained, and the results will be analysed with regard to classification accuracy as well as training and classification times. We will investigate the requirements regarding dedicated hardware and training set sizes. Lastly, we will briefly explore how one of these approaches can also be used for robust scale- and rotation-invariant object detection.
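Of the two approaches discussed, ridge regression is simple enough to sketch in full: the closed-form weights (X^T X + lambda*I)^-1 X^T y, with the sign of the prediction used as the class label (toy data, illustrative only):

```python
import numpy as np

def fit_ridge(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam*I)^-1 X^T y.

    For two-class classification, y holds +1/-1 labels and
    sign(X @ w) is used as the predicted class.
    """
    n_features = X.shape[1]
    A = X.T @ X + lam * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

# Toy two-class problem: the class is decided by the first feature's sign.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.where(X[:, 0] > 0, 1.0, -1.0)
w = fit_ridge(X, y, lam=0.1)
accuracy = (np.sign(X @ w) == y).mean()
```

Because training is a single linear solve, ridge regression needs neither a GPU nor large training sets, which is exactly the trade-off against convolutional networks that the session examines.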

Why is my system not working? Troubleshooting in machine vision systems

STEMMER IMAGING

17.10. 10:30-10:55 | 18.10. 9:30-9:55 | German sessions

Speaker: Lothar Burzan

What to do when nothing works? Machine vision systems are becoming more and more complex and varied. It is getting harder to diagnose problems, as the cause of the trouble and its symptoms may be far apart.

This presentation shows methods for troubleshooting and gives hints on how to avoid mistakes and detect errors early.

Embedded vision for industry – A look at embedded vision solutions for industrial applications

Teledyne DALSA

17.10. 13:30-13:55 | 18.10. 12:15-12:40 | German sessions

Speaker: Steve Geraghty

The term “Embedded Vision” can take on a different meaning depending on who is using it and what the end application is. To some, embedded vision can simply mean the integration of a sensor for the purpose of digitizing an image, whereas to others it could mean deploying smart vision solutions for manufacturing quality control, robotic guidance or logistical movement and tracking of product.

This presentation explores some of these differences and focuses on embedded vision for industry, including a brief tour of the different industrial embedded solutions available on the market today, reviewing their applicability and comparing their strengths and weaknesses. It will also discuss some of the common integration challenges that users face and take a look at some important software requirements that should be considered before deciding which embedded vision solution is right for you.

It’s not just black and white anymore – How multi-modal imaging is changing machine vision

Teledyne DALSA

17.10. 11:00-11:25 | 18.10. 14:15-14:40 | German sessions

Speaker: Andreas Lange

The world of imaging has moved well beyond the realm of monochrome and colour imaging. In the past, imaging was used to generate pass/fail criteria from images collected in measurement and verification applications. Today, however, image data must provide sufficient information to understand root causes and improve yield. Furthermore, image data is now being associated with many invisible product quality attributes.

New techniques from research consortia and universities continue to identify new relationships between material properties and the electromagnetic spectrum, so that non-destructive or non-invasive techniques can be employed to gather real-time information. However, the additional information desired often requires multiple data streams and synthesis of the raw image data. Many of the most basic applications now utilise multiple methods of imaging, and this session will explore some of these techniques.

Challenges and trends in microbolometer design

Teledyne DALSA

17.10. 15:45-16:10 | 18.10. 9:00-9:25 | German sessions

Speaker: Uwe Pulsfort

Due to the availability of smaller, more sensitive and cheaper solutions, long-wave infrared technologies are of increasing interest in a very wide variety of application areas. The aim of the session is to illustrate the challenges in the development and manufacturing process of uncooled LWIR detectors, inform you about upcoming trends such as wafer-level packaging, and highlight different techniques that allow the implementation of specific feature sets to meet the different requirements of industrial applications. Finally, an outlook on future designs and market trends will be given to the audience.

What applications would warrant 155 Megapixel sensors/cameras?

Vieworks

17.10. 16:45-17:10 | 18.10. 9:30-9:55 | German sessions

Speaker: Wojciech Majewski

Since Kodak created the first digital sensor/camera in 1975 (with only 0.1 megapixels), the resolution of sensors has increased tremendously, while their weight has dropped. There is an almost insatiable need for ever higher resolution and speed, driven by the Automated Optical Inspection (AOI) systems used in electronics, semiconductor and flat panel display (FPD) manufacturing.

AOI systems ensure consistent production quality in high-speed manufacturing processes; cameras with increased resolution monitor the devices under inspection for defects and quality.
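A quick back-of-the-envelope calculation shows why such resolutions stress camera interfaces (figures illustrative, not Vieworks specifications):

```python
def data_rate_gbps(megapixels, bits_per_pixel, fps):
    """Raw sensor data rate in gigabits per second."""
    return megapixels * 1e6 * bits_per_pixel * fps / 1e9

# A 155 MP sensor at 10 bit depth and a modest 10 fps:
rate = data_rate_gbps(155, 10, 10)   # 15.5 Gbit/s of raw data
```

Even this modest frame rate exceeds a single 10 GigE link, which is why cameras in this class rely on interfaces such as CoaXPress or multiple aggregated links.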

10 GigE and NBASE-T – Image processing made easy thanks to Cisco

VRmagic

17.10. 16:15-16:40 | 18.10. 14:45-15:10 | German sessions

Speaker: Oliver Menken

PC hardware interfaces have gained popularity in the world of image processing due to the shift from analogue to digital cameras. Whereas the early digital cameras used FireWire interfaces, the current generation uses interfaces such as USB and Gigabit Ethernet, which are available on any standard PC, and CoaXPress or Camera Link when high data rates are required.

The trend towards CoaXPress and Camera Link in machine vision applications is driven by increasing sensor resolutions and speeds. Over the last few years, key players such as Cisco have rolled out 10 GigE and NBASE-T interfaces in Ethernet backbones and high-performance Wi-Fi routers. In 2017, the growing acceptance of this standard in the consumer marketplace has seen the arrival of 10 GbE Network Attached Storage (NAS) devices, 10 GbE integration into motherboards and even the first Network Interface Controller (NIC) cards for less than 100 USD.

The industrial image processing community benefits from the work of Cisco, Intel and others: the first 10 GigE cameras for image processing are available, and applications engineers profit from simplified component selection, long cable lengths and low latencies – all with low-cost standard network infrastructure.

High-speed InGaAs SWIR line arrays & cameras for OCT and machine vision systems

Xenics

17.10. 11:30-11:55 | 18.10. 11:15-11:40 | German sessions

Speaker: Guido Deutz

Indium gallium arsenide (InGaAs) photodiodes are conquering more and more industrial opto-electronic applications in the short-wave infrared spectrum from 0.9 to 1.7 µm (so-called SWIR).

This lecture provides a short historical overview of the material's development up to the modern flip-chip detector architecture. Furthermore, practical machine vision and optical coherence tomography (OCT) examples will be presented, based on the newest high-speed InGaAs line arrays and cameras.

New generation of laser modules and how you can benefit from them

Z-Laser

17.10. 14:30-14:55 | 18.10. 11:15-11:40 | German sessions

Speaker: Stephan Broche

Lasers have become well established as a light source in 3D measurement applications such as triangulation sensors over the last 20 years. They are an indispensable part of today’s automated production lines. Nowadays, various wavelengths and output powers are available to provide the optimum foundation for the inspection of various materials under different conditions.

All components in a machine vision setup have been confronted with new challenges since the era of Industry 4.0 arrived. Thanks to new production processes and sophisticated driver electronics, laser modules now offer more than just the projection of a pattern. This presentation will show you new developments in the field of laser modules and how you can benefit from them.
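The laser triangulation principle mentioned above converts the lateral shift of the projected laser line in the image into a height change on the object. A simplified-geometry sketch (illustrative parameters):

```python
import math

def height_from_shift(pixel_shift, pixel_size_mm, magnification, angle_deg):
    """Laser triangulation: convert the laser line's lateral image shift
    into an object height change.

    Simplified geometry: the camera looks perpendicular to the surface
    and the laser line is projected at angle_deg from the surface normal.
    """
    shift_mm = pixel_shift * pixel_size_mm / magnification
    return shift_mm / math.tan(math.radians(angle_deg))

# Example: 12 px shift, 5 um pixels, 0.2x lens, 30 degree laser angle.
h = height_from_shift(12, 0.005, 0.2, 30.0)
```

The achievable height resolution therefore depends directly on line sharpness and projection angle, which is why line quality is a key specification of these laser modules.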

Why is the F-Mount obsolete?

Zeiss

17.10. 15:45-16:10 | 18.10. 14:15-14:40 | German sessions

Speaker: Udo Schellenbach

Since the 1980s people have been crying out for computerised automatic image processing and analysis. The technology drivers until now have mainly been advances in computer technology, sensor technology, illumination and algorithms. The optics used were mostly those available from the video market (C-Mount) or the photo market (F-Mount).

Today F-Mount lenses are used in large numbers of industrial applications despite their widely known drawbacks and the fact that better alternatives are available. Are there still reasons to use F-Mount lenses in industrial applications?
