In order to make changes to the room layout, the application needs to understand which types of objects are present in the scene and where they are located. This information can be used to determine the type of room, to find spaces where new objects can be added, and to remove objects from the image. The latter also requires knowing the exact boundaries of the object.
A basic approach to understanding a scene visually is to classify it based on the objects present. Classification alone only assigns a global label to the image, although the areas that contributed to the classification can additionally be localized. Object detection finds instances of specific object classes and localizes them, typically with a bounding box. Semantic segmentation produces a dense labelling of the image, providing a class label for every pixel; however, neighboring or overlapping objects belonging to the same class are merged. Instance segmentation also provides pixel-level labels, but separates the objects and assigns a label to each individual object instance.
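To illustrate what instance segmentation produces, the minimal sketch below runs a pretrained Mask R-CNN from torchvision on a single room photo and prints the class name and mask size of each confident detection. The file name room.jpg and the 0.7 score threshold are placeholders, and this is only an off-the-shelf example, not the model used in ATLANTIS.

import torch
from PIL import Image
from torchvision.models.detection import maskrcnn_resnet50_fpn, MaskRCNN_ResNet50_FPN_Weights
from torchvision.transforms.functional import to_tensor

weights = MaskRCNN_ResNet50_FPN_Weights.DEFAULT
model = maskrcnn_resnet50_fpn(weights=weights)
model.eval()

image = Image.open("room.jpg").convert("RGB")   # placeholder input image
inp = to_tensor(image)

with torch.no_grad():
    out = model([inp])[0]   # dict with "boxes", "labels", "scores", "masks"

keep = out["scores"] > 0.7  # arbitrary confidence threshold
for label, mask in zip(out["labels"][keep], out["masks"][keep]):
    name = weights.meta["categories"][int(label)]
    binary_mask = mask[0] > 0.5           # per-pixel mask of this object instance
    print(name, int(binary_mask.sum()), "pixels")

Each detected instance comes with its own mask, so two chairs of the same class remain separate objects, which is exactly the distinction from semantic segmentation described above.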
In ATLANTIS, we start from a panorama (360° image) taken by the user in order to capture as much of the scene as possible from a single viewpoint. We thus adapt instance segmentation methods to work on panoramic representations (such as equirectangular images) in order to avoid analyzing multiple views of the scene separately.
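One simple way to let a segmentation model trained on ordinary images cope with the horizontal wrap-around of an equirectangular panorama is to pad the image with its own opposite border before inference, so that objects crossing the 360° seam are not split, and map coordinates back afterwards. The sketch below shows this idea with NumPy; the 10% padding fraction is an arbitrary choice and the helper names are hypothetical, not the actual ATLANTIS pipeline.

import numpy as np

def pad_equirectangular(pano: np.ndarray, pad_frac: float = 0.1):
    """Add horizontal wrap-around padding to an H x W x 3 equirectangular image.

    Returns the padded image and the pad width in pixels, which is needed to
    map detections back to panorama coordinates.
    """
    h, w = pano.shape[:2]
    pad = int(round(w * pad_frac))
    padded = np.concatenate([pano[:, -pad:], pano, pano[:, :pad]], axis=1)
    return padded, pad

def unpad_x(x: float, pad: int, width: int) -> float:
    """Map an x coordinate from the padded image back into [0, width) of the panorama."""
    return (x - pad) % width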
Many approaches only segment "foreground" object classes, while for some ATLANTIS use cases background ("stuff") classes such as walls or ceilings are also relevant. This kind of segmentation, which labels both things and stuff, is typically referred to as panoptic segmentation.
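The sketch below shows one common way to assemble a panoptic map from a dense "stuff" labelling and per-instance "thing" masks, encoding class and instance in a single integer per pixel. The class*1000 + instance scheme mirrors the COCO panoptic convention, but it is only an assumption here and not necessarily how ATLANTIS represents its results.

import numpy as np

def merge_panoptic(semantic: np.ndarray,
                   instance_masks: list,
                   instance_labels: list) -> np.ndarray:
    """Combine a dense stuff labelling with per-instance thing masks.

    semantic: H x W array of class ids (wall, ceiling, floor, ...).
    instance_masks: list of H x W boolean masks, one per detected object.
    instance_labels: class id for each instance mask.
    Returns an H x W int array where stuff pixels keep class_id * 1000 and each
    thing instance i gets class_id * 1000 + (i + 1).
    """
    panoptic = semantic.astype(np.int64) * 1000              # stuff regions, instance id 0
    for i, (mask, cls) in enumerate(zip(instance_masks, instance_labels)):
        panoptic[mask] = cls * 1000 + (i + 1)                # things overwrite stuff
    return panoptic

Where instance masks overlap, later instances simply overwrite earlier ones in this sketch; a real panoptic pipeline would resolve such conflicts by confidence or mask ordering.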