Mobile Graphics

The increased availability and performance of mobile graphics terminals, including smartphones and tablets with high-resolution screens and powerful GPUs, combined with the increased availability of high-speed mobile data connections, is opening the door to a variety of networked graphics applications. In this world, native apps and mobile sites coexist to provide us with access to a wealth of multimedia information while we are on the move. As part of the TDM project, we have developed educational material that provides a technical introduction to the mobile graphics world, spanning the hardware-software spectrum, and exploring the state of the art and key advances in specific application domains, including capture and acquisition, real-time high-quality 3D rendering, and interactive exploration. On the basis of this material, experts from the TDM project have delivered a half-day tutorial on Mobile Graphics at the tenth edition of SIGGRAPH Asia, held from 27 to 30 November 2017 in Bangkok, Thailand. The conference, with close to 7,000 attendees from over 60 countries, is the region's largest annual conference and exhibition on computer graphics and interactive techniques. The course was organized by CRS4 in collaboration with experts from UPC (Spain) and KAUST (Saudi Arabia). For more information: https://sa2017.siggraph.org/attendees/courses?view=session&sid=8. In this page, we provide the same material, extending it with additional practical information. This material will be further extended as part of the TDM summer schools.

Outline

In the last decades, worldwide mobile phone subscriptions grew to over seven billion, reaching 100% penetration of the global population and even the bottom of the economic pyramid. Of these subscriptions, about 3.5 billion have internet access, and the amount of exchanged information continues to grow exponentially, with multimedia data taking a large share. Hence, it is expected that the future of computing in general, and visual computing in particular, will be dramatically affected by this scenario. In particular, for the scope of this project, and for smart city applications in general, personal mobile terminals, thanks to their sensors, can be considered important sources of information on people and environments, as well as widely available means to deliver (graphical) data. In these pages, we present an overview of the evolution and state of the art of mobile graphics platforms, discuss current graphics APIs and development tools for mobile applications, and illustrate current trends in exploiting capture, networking, and display hardware. Detailed examples are provided in two visual computing domains of particular interest to TDM-related applications: exploiting data fusion techniques for metric capture and reconstruction, and exploiting networking and display hardware to visualize and interact with massive models. The material concludes with a view of future challenges.

  • SLIDE SET - Session 0: Introduction [pdf]

Evolution of mobile graphics

Over the last 15+ years, mobile phones have changed from being just a useful tool for emergencies to accompanying us through most of our daily tasks. We use them to guide us while driving, to count how many steps or miles we have covered, to answer our emails, and, of course, to spend some time playing games or watching TV shows. Many companies rely on the ubiquity of mobile devices (and of the Internet connections they provide) to deliver their services. We ask for a taxi while listening to cloud-stored music, and receive continuous notifications on transit, weather, shopping offers, and more. The Ericsson Mobility Report [Eri16] predicts that global mobile subscriptions will grow from 7.3 billion to 9 billion by 2021, with a notable growth in smartphones, whose subscriptions will double. It also predicts that mobile traffic will grow from 1.4 GB/month to 8.9 GB/month by 2021. Games arrived early on with the advent of mobile devices. In 1997, the Nokia 6110 sported the Snake game. It became so popular that in 2017 a new version of the Nokia 3310 was announced with all the bells and whistles, with Snake as one of its selling points. Throughout the history of smartphones, gaming [FGBAR12] (especially casual gaming [Kul09]) has thus become one of their most desired features, and with gaming comes the need for ever-increasing graphics power. In the following slide set, we provide an analysis of the evolution of the mobile world from the point of view of graphics, covering also the elements that make it possible, such as the GPU, the operating system, and so on. The main areas that will be covered are:

  • Overview of the mobile world and its capillary diffusion;
  • Description of the main characteristics of mobile graphics terminals with respect to network capabilities, graphics and displays, sensors, and CPUs;
  • Discussion of current issues and future opportunities;
  • Analysis of mobile application trends.
  • SLIDE SET - Session 1: Evolution of mobile graphics [pdf]

Mobile graphics application trends

Since the release of the iPhone, smartphones have become mainstream. Most of us own at least one, and it is not uncommon to see people carrying two. However, although, according to the GSMA, two-thirds of the population own a device [A∗17], there is a lack of variety in vendors and operating systems. Most of the devices sold run either iOS or Android, which together account for 95% of the market. Thus, the evolution of applications, and of graphics in general, is tightly linked to the evolution of those two operating systems [GR11, Tra12]. In the following slide set, we cover different aspects of mobile graphics, such as:

  • General overview of rendering pipeline for interactive 3D mobile applications;
  • Evolution of 3D graphics applications, from remote rendering solutions to hybrid mobile/remote solutions, exploiting image-based or model-based methods;
  • Overview of current trends in hardware acceleration employing parallel pipelines, real-time ray tracing, and multi-rate approaches;
  • Overview of other visual mobile applications such as 3D acquisition and processing, physical simulations, and visual aberration correction.

Graphics development for mobile systems

As noted above, only two operating systems dominate the market, with over 95% of the market share between them: iOS and Android. Apple was the first to create a closed channel for installing new apps on a mobile device, through the App Store: device owners can only install applications from the official store. Moreover, Apple uses a strict approval-first policy that is lengthy and sometimes arbitrary. Although there are other means to install applications on an iPhone or iPad, they are forbidden by Apple and, due to their technical complications, are by no means accessible to most of the population. This has permitted Apple to keep fine-grained control over which applications appear in the store and, though it has its own shortcomings, especially for developers, it has some advantages for users, such as the relative safety of the content. Following the path marked by Apple, Google also provides a store, although users can easily install applications from other sources, even from other vendors such as Amazon or Samsung, which compete with Google both in selling devices and in selling apps. Both operating system owners (Apple and Google) have created tools that facilitate the development of applications. In the case of iOS, Apple provides an SDK for development using Objective-C. More recently, Apple has been shifting to a new language called Swift, and moving graphics development to a new API called Metal [CP15], both developed by Apple. Google chose Java instead, for its popularity. The Android SDK also provides a continuously evolving set of tools for application development. The graphics API implemented in Android is OpenGL ES, although new versions of Android also support its successor, Vulkan. A minimal rendering setup example is sketched after the list below. In the following slide set, we cover the following topics:

  • Overview of main operating systems employed in mobile platforms;
  • Description of currently used programming languages, tools, and deployment environments;
  • Introduction to currently available 3D APIs.
  • SLIDE SET - Session 3: Graphics development for mobile systems [pdf]
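
To make the toolchain discussion more concrete, the fragment below shows what a minimal OpenGL ES setup on Android looks like. It is only a sketch under simple assumptions (a plain Java Android project targeting OpenGL ES 2.0; the class names are illustrative and not taken from the course material): it creates a GLSurfaceView, requests an ES 2.0 context, and clears the screen each frame, which is the usual starting point before adding actual draw calls.

```java
// Minimal sketch: assumes a standard Android project targeting OpenGL ES 2.0.
// Class names are illustrative, not taken from the course material.
import android.app.Activity;
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import android.os.Bundle;

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

public class MinimalGLActivity extends Activity {

    // A renderer that just clears the screen each frame: the smallest possible
    // OpenGL ES workload, useful to verify that the GL context is set up.
    static class ClearRenderer implements GLSurfaceView.Renderer {
        @Override
        public void onSurfaceCreated(GL10 gl, EGLConfig config) {
            GLES20.glClearColor(0.1f, 0.1f, 0.2f, 1.0f); // dark blue background
        }

        @Override
        public void onSurfaceChanged(GL10 gl, int width, int height) {
            GLES20.glViewport(0, 0, width, height); // match viewport to surface size
        }

        @Override
        public void onDrawFrame(GL10 gl) {
            GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
            // Draw calls for the actual scene would go here.
        }
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        GLSurfaceView view = new GLSurfaceView(this);
        view.setEGLContextClientVersion(2); // request an OpenGL ES 2.0 context
        view.setRenderer(new ClearRenderer());
        setContentView(view);
    }
}
```

By contrast, Metal and Vulkan require considerably more explicit setup (devices, command queues, pipeline state), trading convenience for finer control over the GPU.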

Scalable mobile visualization

The GPUs of mobile terminals have evolved considerably. However, despite claims to the contrary, mobile GPUs are still not on par with console or desktop GPUs, mostly because of energy considerations. This makes the use of large scenes or datasets quite difficult, and creates the need for specialized algorithms able to deal with large models. There are, in particular, several main challenges when developing graphics applications for mobile devices:

  • Network and memory limitations: The full model might not fit inside the mobile device, which is much more limited than its desktop counterpart; moreover, sending large models to mobile devices must take into account the limitations of wireless networks. Mobile graphics systems must thus include techniques to limit bandwidth and memory consumption (smart data representations, compression, adaptive renderers, levels of detail).
  • GPU capabilities: The algorithm we intend to use may require GPU features that are not available, especially if the application targets many different devices (e.g., not all GPUs support tessellation). The algorithms must thus be designed to circumvent limitations in GPU capabilities (e.g., using simpler techniques and multi-pass methods).
  • GPU horsepower and performance: Even if the GPU has the capabilities, and the memory to store the data, the required algorithm (e.g., GPU-based raycasting) might not achieve enough performance for the implementation to run smoothly. At worst, the algorithm could be so costly that the operating system kills the application because it has become unresponsive. The algorithms must thus be designed to ensure performance and responsiveness (approximation methods, multiresolution, levels of detail, progressive rendering); a simple level-of-detail selection sketch is shown right after this list.
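
As an illustration of the kind of scalability machinery these challenges call for, the following sketch shows a simple screen-space-error-driven level-of-detail selection. It is a hedged example under our own assumptions (a precomputed set of discrete LOD levels, each annotated with an object-space geometric error; the names and the error projection formula are illustrative), not the specific multiresolution structures covered in the slides.

```java
// Minimal sketch of screen-space-error-driven level-of-detail selection, one
// common way to bound rendering cost and memory use on mobile GPUs.
// Class names and the error formula are illustrative assumptions.
import java.util.Arrays;
import java.util.List;

public class LodSelector {

    static class LodLevel {
        final int triangleCount;     // geometric complexity of this level
        final float geometricError;  // object-space error (e.g., in meters)

        LodLevel(int triangleCount, float geometricError) {
            this.triangleCount = triangleCount;
            this.geometricError = geometricError;
        }
    }

    // Projects an object-space error to an approximate size in pixels, given the
    // distance to the viewer, the vertical field of view, and the viewport height.
    static float screenSpaceError(float geometricError, float distance,
                                  float fovYRadians, int viewportHeightPx) {
        float projectionScale = viewportHeightPx / (2.0f * (float) Math.tan(fovYRadians / 2.0f));
        return geometricError / Math.max(distance, 1e-3f) * projectionScale;
    }

    // Picks the coarsest level whose projected error stays below the pixel tolerance,
    // so distant objects use fewer triangles and the frame budget is respected.
    // Levels are assumed ordered from coarsest (large error) to finest (small error).
    static LodLevel selectLod(List<LodLevel> levels, float distance, float fovY,
                              int viewportHeightPx, float maxErrorPx) {
        for (LodLevel level : levels) {
            if (screenSpaceError(level.geometricError, distance, fovY, viewportHeightPx) <= maxErrorPx) {
                return level;
            }
        }
        return levels.get(levels.size() - 1); // fall back to the finest available level
    }

    public static void main(String[] args) {
        List<LodLevel> levels = Arrays.asList(
                new LodLevel(1_000, 0.5f),
                new LodLevel(10_000, 0.05f),
                new LodLevel(100_000, 0.005f));
        LodLevel chosen = selectLod(levels, 25.0f, 1.0f, 1920, 1.0f);
        System.out.println("Selected LOD with " + chosen.triangleCount + " triangles");
    }
}
```

Real systems combine such per-object or per-node tests with out-of-core data management, compression, and progressive streaming, as discussed below.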

Over the last years, there have been many developments tailored to large datasets, improving rendering both in terms of quality and of performance. The initial trend was the creation of client-server architectures, where the mobile device simply displays an image generated by a high-end device [PGGB12]. This architecture has been used for meshes, point clouds, and volume datasets [LS07]. Only recently has the arrival of more powerful GPUs made it possible to render large models directly on the device, although always with some sort of simplification [BAMG14a, BGM∗12]. This is due to the fact that current mobile GPUs are still 10 to 40 times slower than desktop GPUs, as can be seen in benchmarks that compare them (such as GFXBench). For high-quality illumination, we only find papers that address some parts of the rendering equation, be it shadows [BLM16] or some physically-based rendering effects [LKC14]. In many cases, when implemented on mobile devices, global illumination approximations rely on a large number of precalculations [KUK15], usually in the form of light maps [SE16]. There are just a few techniques that compute a certain approximation of the rendering equation in real time, e.g., ambient occlusion [SV16a]. In the case of volumetric models, it was not until 2011-2012 that rendering them directly on the smartphone became possible, using standard GPU-based raycasting [BV12]. But even the frame rates that one can achieve with the latest devices are still low for volumetric datasets [SAK15]. In the following slide set, we cover the following topics:

  • Characteristics of mobile devices with respect to massive models, and the need for scalable techniques with low CPU overhead;
  • Converting complex environments to image-based solutions;
  • Scaling in terms of geometry: comparison of general chunked triangulation/point-cloud methods with constrained techniques exploiting pre-defined compressed image formats and graphics primitives;
  • Improving visual quality with real-time screen-space based global illumination methods for mobile applications.
  • Approaches to support interactive volumetric exploration.

Mobile metric capture and reconstruction

Mobile devices have become increasingly attractive for solving environment sensing problems, given their multi-modal acquisition capabilities and their growing processing power, which enable fast digital acquisition and effective information extraction [DL15]. This is of particular interest for a project dealing with spatial data, such as TDM, since the current combination of many sensors in a single device opens the door to multi-modal acquisition and data fusion. In particular, by exploiting visual sensors as well as position and orientation sensors, it is possible to recover the shape of complex environments; a minimal sensor-access sketch is provided after the list below. In the following slide set, we cover the following subjects:

  • Overview of the sensor and sensor-fusion capabilities available on mobile devices, ranging from commodity smartphones and tablets to new-generation spherical panoramic cameras (SPC);
  • Introduction to image-based 3D reconstruction methods running on mobile devices;
  • Mobile metric reconstruction methods using a combination of images and inertial acceleration data with an example implementation of a full acquisition and SfM reconstruction pipeline working in a limited bounding volume;
  • Real-world cases: mobile mapping and reconstruction of indoor scenes, from the limits of the perspective views to the advantages of the wide-FOV panoramic images;
  • Example of application: mobile reconstruction and exploration of indoor structures exploiting omnidirectional images;
  • Motion estimation and dense depth map generation from small motion with a mobile SPC;
  • Future trends: automatic mapping and reconstruction of indoor structures from 360° video sequences.
  • SLIDE SET - Session 5: Mobile metric capture and reconstruction [pdf]
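
To give a practical flavour of the sensor side of these pipelines, the following sketch shows how an Android application can sample the inertial and orientation sensors that image-plus-IMU reconstruction methods fuse with camera frames. It is a minimal, illustrative example (class names are ours; it is not the acquisition pipeline described in the slides): it registers listeners for linear acceleration and the fused rotation vector and logs timestamped samples.

```java
// Minimal sketch of reading inertial and orientation sensors on Android, the kind
// of data that image-plus-IMU reconstruction pipelines fuse with camera frames.
// Class names are illustrative; this is not the pipeline described in the slides.
import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;
import android.util.Log;

import java.util.Arrays;

public class SensorCaptureActivity extends Activity implements SensorEventListener {

    private SensorManager sensorManager;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
    }

    @Override
    protected void onResume() {
        super.onResume();
        // Linear acceleration (gravity removed) and the fused rotation vector:
        // together they describe device motion and orientation over time.
        register(Sensor.TYPE_LINEAR_ACCELERATION);
        register(Sensor.TYPE_ROTATION_VECTOR);
    }

    @Override
    protected void onPause() {
        super.onPause();
        sensorManager.unregisterListener(this); // stop sampling to save battery
    }

    private void register(int sensorType) {
        Sensor sensor = sensorManager.getDefaultSensor(sensorType);
        if (sensor != null) {
            sensorManager.registerListener(this, sensor, SensorManager.SENSOR_DELAY_GAME);
        }
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // event.timestamp is in nanoseconds; a real pipeline would align these
        // samples with camera frame timestamps before fusing them.
        if (event.sensor.getType() == Sensor.TYPE_LINEAR_ACCELERATION) {
            Log.d("capture", "accel (m/s^2): " + Arrays.toString(event.values));
        } else if (event.sensor.getType() == Sensor.TYPE_ROTATION_VECTOR) {
            Log.d("capture", "rotation vector: " + Arrays.toString(event.values));
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // Not needed for this sketch.
    }
}
```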

Wrap-up

The material provided here offers a general introduction to, and summary of, the domain of mobile graphics. Some particular works are presented in more detail to illustrate case studies of significant methods and applications. Relevant references are the following:

  • capture and reconstruction [PGGS16b, GPG∗16, PGG∗16, PAG14, PG14];
  • metric reconstruction with mobile devices [GPG∗16];
  • presentation of enhanced image representations [AJPG16];
  • image-based exploration of environments [PGGS16a, DGB∗14];
  • real-time interactive exploration of massive surfaces [BAMG14b, BGMT13, GMB∗12] and volumes [VB12, DGBN∗16];
  • real-time global illumination [SV16b].
  • SLIDE SET - Session 6: Closing [pdf]

References

  • [A∗17] GSMA: The mobile economy 2017. URL: http://www.gsma.com/mobileeconomy/ [accessed March 2017].
  • [AJPG16] AGUS M., JASPE VILLANUEVA A., PINTORE G., GOBBETTI E.: PEEP: Perceptually enhanced exploration of pictures. In Proc. VMV (Oct. 2016).
  • [BAMG14a] BALSA M., AGUS M., MARTON F., GOBBETTI E.: HuMoRS: Huge models mobile rendering system. In Proceedings of the 19th International ACM Conference on 3D Web Technologies (2014), ACM, pp. 7–15.
  • [BAMG14b] BALSA RODRIGUEZ M., AGUS M., MARTON F., GOBBETTI E.: HuMoRS: Huge models mobile rendering system. In Proc. Web3D (Aug. 2014), pp. 7–16.
  • [BGM∗12] BALSA M., GOBBETTI E., MARTON F., PINTUS R., PINTORE G., TINTI A.: Interactive exploration of gigantic point clouds on mobile devices. In VAST (2012), pp. 57–64.
  • [BGMT13] BALSA RODRÍGUEZ M., GOBBETTI E., MARTON F., TINTI A.: Coarse-grained multiresolution structures for mobile exploration of gigantic surface models. In Proc. SIGGRAPH Asia Symposium on Mobile Graphics and Interactive Applications (November 2013), pp. 4:1–4:6.
  • [BLM16] BALA S., LOPEZ MENDEZ R.: Efficient soft shadows based on static local cubemap. GPU Pro 7 (2016), 175.
  • [BV12] BALSA M., VÁZQUEZ P. P.: Practical volume rendering in mobile devices. In International Symposium on Visual Computing (2012), Springer, pp. 708–718.
  • [CP15] CAIRA T., PRIETO J.: Learn Metal for iOS 3D game development.
  • [DGB∗14] DI BENEDETTO M., GANOVELLI F., BALSA RODRIGUEZ M., JASPE VILLANUEVA A., SCOPIGNO R., GOBBETTI E.: ExploreMaps: Efficient construction and ubiquitous exploration of panoramic view graphs of complex 3D environments. Computer Graphics Forum 33, 2 (2014), 459–468.
  • [DGBN∗16] DÍAZ-GARCÍA J., BRUNET P., NAVAZO I., PEREZ F., VÁZQUEZ P.-P.: Adaptive transfer functions. The Visual Computer (2016), 1–11.
  • [DL15] DEV K., LAU M.: Democratizing digital content creation using mobile devices with inbuilt sensors. Computer Graphics and Applications 35, 1 (Jan 2015), 84–94.
  • [Eri16] ERICSSON: Ericsson mobility report, 2016. https://www.ericsson.com/res/docs/2016/ericsson-mobility-report2016.pdf [accessed March 2017].
  • [FGBAR12] FEIJOO C., GÓMEZ-BARROSO J.-L., AGUADO J.-M., RAMOS S.: Mobile gaming: Industry challenges and policy implications. Telecommunications Policy 36, 3 (2012), 212–221.
  • [GMB∗12] GOBBETTI E., MARTON F., BALSA RODRIGUEZ M., GANOVELLI F., DI BENEDETTO M.: Adaptive quad patches: an adaptive regular structure for web distribution and adaptive rendering of 3d models. In Proc. Web3D (Aug. 2012), pp. 9–16.
  • [GPG∗16] GARRO V., PINTORE G., GANOVELLI F., GOBBETTI E., SCOPIGNO R.: Fast metric acquisition with mobile devices. In Proc. VMV (Oct. 2016).
  • [GR11] GOADRICH M. H., ROGERS M. P.: Smart smartphone development: iOS versus Android. In Proceedings of the 42nd ACM Technical Symposium on Computer Science Education (2011), ACM, pp. 607–612.
  • [KUK15] KÁN P., UNTERGUGGENBERGER J., KAUFMANN H.: High-quality consistent illumination in mobile augmented reality by radiance convolution on the gpu. In International Symposium on Visual Computing (2015), Springer, pp. 574–585.
  • [Kul09] KULTIMA A.: Casual game design values. In Proceedings of the 13th international MindTrek conference: Everyday life in the ubiquitous era (2009), ACM, pp. 58–65.
  • [LKC14] LEE W.-S., KIM S.-D., CHIN S.: Subsurface scattering-based object rendering techniques for real-time smartphone games. Mathematical Problems in Engineering (2014).
  • [LS07] LAMBERTI F., SANNA A.: A streaming-based solution for remote visualization of 3D graphics on mobile devices. IEEE Transactions on Visualization and Computer Graphics 13, 2 (2007).
  • [PAG14] PINTORE G., AGUS M., GOBBETTI E.: Interactive mapping of indoor building structures through mobile devices. In Proc. 3DV Workshop on 3D Computer Vision in the Built Environment (Dec. 2014).
  • [PG14] PINTORE G., GOBBETTI E.: Effective mobile mapping of multiroom indoor structures. The Visual Computer 30, 6–8 (2014), 707–716.
  • [PGG∗16] PINTORE G., GARRO V., GANOVELLI F., GOBBETTI E., AGUS M.: Omnidirectional image capture on mobile devices for fast automatic generation of 2.5D indoor maps. In Proc. WACV (Feb. 2016), pp. 1–9.
  • [PGGB12] PINTORE G., GOBBETTI E., GANOVELLI F., BRIVIO P.: 3DNSITE: A networked interactive 3D visualization system to simplify location awareness in crisis management. In Proceedings of the 17th International Conference on 3D Web Technology (2012), ACM, pp. 59–67.
  • [PGGS16a] PINTORE G., GANOVELLI F., GOBBETTI E., SCOPIGNO R.: Mobile mapping and visualization of indoor structures to simplify scene understanding and location awareness. In Proc. ACVR (Oct. 2016).
  • [PGGS16b] PINTORE G., GANOVELLI F., GOBBETTI E., SCOPIGNO R.: Mobile reconstruction and exploration of indoor structures exploiting omnidirectional images. In Proc. SIGGRAPH Asia Symposium on Mobile Graphics and Interactive Applications (Dec. 2016), pp. 1:1–1:4.
  • [SAK15] SCHIEWE A., ANSTOOTS M., KRÜGER J.: State of the art in mobile volume rendering on iOS devices. In Eurographics Conference on Visualization (EuroVis) - Short Papers (2015), The Eurographics Association.
  • [SE16] SMITH A. V., EINIG M.: Physically based deferred shading on mobile. GPU Pro 7: Advanced Rendering Techniques (2016), 187.
  • [SV16a] SUNET M., VAZQUEZ P.-P.: Optimized screen-space ambient occlusion in mobile devices. In Proceedings of the 21st International Conference on Web3D Technology (2016), ACM, pp. 127–135.
  • [SV16b] SUNET M., VAZQUEZ P.-P.: Optimized screen-space ambient occlusion in mobile devices. In Proc. Web3D (2016), ACM, pp. 127–135.
  • [Tra12] TRACY K. W.: Mobile application development experiences on Apple iOS and Android OS. IEEE Potentials 31, 4 (2012), 30–34.
  • [VB12] VÁZQUEZ P.-P., BALSA M.: Practical volume rendering in mobile devices. In Proc. International Symposium on Visual Computing, vol. 7431 of Lecture Notes in Computer Science (LNCS). Springer Verlag, 2012, pp. 708–718.