Thomas Marrinan | University of St. Thomas, Minnesota
Papers by Thomas Marrinan
Journal of Computational Science, Feb 1, 2019
Performing analysis or generating visualizations concurrently with high-performance simulations can yield great benefits compared to post-processing data. Writing and reading large volumes of data can be reduced or eliminated, thereby producing an I/O cost savings. One such method for concurrent simulation and analysis is in transit: streaming data from the resource running the simulation to a separate resource running the analysis. In transit analysis can be beneficial because compute resources may lack hardware needed for visualization and analysis (e.g. GPUs), and because it reduces the impact of analysis tasks on the run time of the simulation. When sending and receiving data in transit, data redistribution mechanisms are needed to support the heterogeneous data layouts that may be required by the simulation and analysis applications. The work described in this paper compares two mechanisms for on-the-fly data redistribution when streaming data in parallel between two distributed memory applications. Our results show that it is often more advantageous to stream data in the same layout as the sender and redistribute data amongst processes on the receiving end than to stream data in the final layout needed by the receiver.
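The redistribution problem the paper studies can be illustrated with a small sketch (hypothetical 1-D block layouts and function names, not the paper's actual implementation): given a domain decomposed one way on the sender and another way on the receiver, compute which element ranges each receiver must gather from each sender.

```python
# Sketch: compute the overlap map needed to redistribute a 1-D array
# between two different contiguous block decompositions.
# Hypothetical illustration, not the paper's implementation.

def block_ranges(n_elems, n_ranks):
    """Contiguous block decomposition of n_elems across n_ranks."""
    base, extra = divmod(n_elems, n_ranks)
    ranges, start = [], 0
    for r in range(n_ranks):
        size = base + (1 if r < extra else 0)
        ranges.append((start, start + size))
        start += size
    return ranges

def overlap_map(n_elems, n_senders, n_receivers):
    """For each receiver, list (sender_rank, start, end) ranges to fetch."""
    send = block_ranges(n_elems, n_senders)
    recv = block_ranges(n_elems, n_receivers)
    plan = []
    for r_lo, r_hi in recv:
        parts = []
        for s, (s_lo, s_hi) in enumerate(send):
            lo, hi = max(r_lo, s_lo), min(r_hi, s_hi)
            if lo < hi:  # non-empty intersection of sender and receiver blocks
                parts.append((s, lo, hi))
        plan.append(parts)
    return plan

# 12 elements redistributed from 3 sender ranks to 4 receiver ranks
print(overlap_map(12, 3, 4))
```

Streaming in the sender's layout and applying a plan like this on the receiving end is the strategy the paper found advantageous; the alternative would be for the sender to apply the plan before transmission.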
IEEE Transactions on Visualization and Computer Graphics, May 1, 2021
Surround-view panoramic images and videos have become a popular form of media for interactive viewing on mobile devices and virtual reality headsets. Viewing such media provides a sense of immersion by allowing users to control their view direction and experience an entire environment. When using a virtual reality headset, the level of immersion can be improved by leveraging stereoscopic capabilities. Stereoscopic images are generated in pairs, one for the left eye and one for the right eye, and provide an important depth cue for the human visual system. For computer generated imagery, rendering proper stereo pairs is well known for a fixed view. However, it is much more difficult to create omnidirectional stereo pairs for a surround-view projection that work well when looking in any direction. One major drawback of traditional omnidirectional stereo images is that they suffer from binocular misalignment in the peripheral vision as a user's view direction approaches the zenith / nadir (north / south pole) of the projection sphere. This paper presents a real-time geometry-based approach for omnidirectional stereo rendering that fits into the standard rendering pipeline. Our approach includes tunable parameters that enable pole merging: a reduction in the stereo effect near the poles that can minimize binocular misalignment. Results from a user study indicate that pole merging reduces visual fatigue and discomfort associated with binocular misalignment without inhibiting depth perception.
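The pole-merging idea can be sketched as a latitude-dependent scaling of the eye separation (a simplified illustration with assumed parameter names and falloff shape, not the paper's exact formulation): full stereo near the horizon, smoothly collapsing to mono at the poles.

```python
import math

def pole_merge_scale(latitude, merge_start=math.radians(60), merge_end=math.radians(80)):
    """Scale factor for eye separation as |latitude| approaches a pole.
    Returns 1.0 below merge_start, 0.0 above merge_end, and a smooth
    cosine falloff in between. Latitude 0 is the horizon, pi/2 the zenith."""
    a = abs(latitude)
    if a <= merge_start:
        return 1.0
    if a >= merge_end:
        return 0.0
    t = (a - merge_start) / (merge_end - merge_start)
    return 0.5 * (1.0 + math.cos(math.pi * t))

def eye_offset(latitude, ipd=0.064):
    """Per-vertex eye separation (meters) after pole merging."""
    return ipd * pole_merge_scale(latitude)

print(eye_offset(0.0))               # full stereo at the horizon
print(eye_offset(math.radians(90)))  # mono at the zenith
```

Because the scale is a pure function of latitude, it can be evaluated per vertex in a standard vertex shader, which is what lets such an approach fit into the ordinary rendering pipeline.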
Proceedings of the 2016 ACM International Conference on Interactive Surfaces and Spaces
Real-world group-to-group collaboration often occurs between partially distributed interdisciplinary teams, with each discipline working in a unique environment suited for its needs. Groupware must be flexible so that it can be incorporated into a variety of workspaces in order to successfully facilitate this type of mixed presence collaboration. We have developed two new techniques for sharing and synchronizing multiuser applications between heterogeneous large-scale shared displays. The first technique partitions displays into a perfectly mirrored public space and a local private space. The second technique enables user-controlled partial synchronization, where different attributes of an application can be synchronized or controlled independently. This paper presents two main contributions of our work: 1) identifying deficiencies in current groupware for interacting with data during mixed presence collaboration, and 2) developing two multiuser data synchronization techniques to address these deficiencies and extend current collaborative infrastructure for large-scale shared displays.
2019 IEEE 9th Symposium on Large Data Analysis and Visualization (LDAV), 2019
Ultra-high-resolution visualizations of large-scale data sets are often rendered using a remotely located graphics cluster that does not have a connected display. In such instances, rendered images must either be streamed over a network for live viewing, or saved to disk for later viewing. This process introduces the additional overhead associated with transferring data off of the GPU device. We present early work on real-time compression of rendered visualizations that aims to reduce both the device-to-host data transfer time and the I/O time for streaming or writing to disk. By using OpenGL / CUDA interop, images are compressed on the GPU prior to transferring the data to main memory. Although there is a computation cost to performing compression, our results show that this overhead is more than offset by the reduced data transfer and I/O times.
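The claim that compression overhead is offset by reduced transfer time can be framed as a simple cost model (illustrative numbers and function names only, not measurements from the paper): compressing wins whenever the compression time plus the compressed transfer time is less than the raw transfer time.

```python
# Sketch of the cost model behind GPU-side compression before transfer.
# All figures below are illustrative assumptions, not measured results.

def transfer_time(n_bytes, bandwidth_bps):
    """Seconds to move n_bytes over a link with the given bandwidth (bytes/s)."""
    return n_bytes / bandwidth_bps

def compression_pays_off(n_bytes, ratio, compress_s, bandwidth_bps):
    """True if compressing before transfer beats sending raw data.
    ratio = compressed_size / original_size (e.g. 0.25 for 4:1)."""
    raw = transfer_time(n_bytes, bandwidth_bps)
    compressed = compress_s + transfer_time(n_bytes * ratio, bandwidth_bps)
    return compressed < raw

# Illustrative: a 66-megapixel RGBA frame (~264 MB) over a 10 Gb/s link,
# assuming a hypothetical 4:1 compressor that takes 50 ms on the GPU.
frame_bytes = 66_000_000 * 4
link_bytes_per_s = 10e9 / 8
print(compression_pays_off(frame_bytes, 0.25, 0.050, link_bytes_per_s))
```

The same inequality governs the device-to-host copy: compressing on the GPU shrinks the payload crossing the PCIe bus as well as the network, which is why the two savings compound.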
2021 IEEE 11th Symposium on Large Data Analysis and Visualization (LDAV), 2021
Large-scale scientific simulations typically output massive amounts of data that must be later read in for post-hoc visualization and analysis. With codes simulating complex phenomena at ever-increasing fidelity, writing data to disk during this traditional high-performance computing workflow has become a significant bottleneck. In situ workflows offer a solution to this bottleneck, whereby data is simultaneously produced and analyzed without involving disk storage. In situ analysis can increase efficiency for domain scientists who are exploring a data set or fine-tuning visualization and analysis parameters. Our work seeks to enable researchers to easily create and interactively analyze large-scale simulations through the use of Jupyter Notebooks without requiring application developers to explicitly integrate in situ libraries.
<strong>Major changes</strong>
- added support for warping and edge blending (thanks @voidcycles)
- added support for XCode (thanks @koosha94)
- modules no longer need to be registered with a hub repository; it is possible to tell omegalib to use any GitHub repository containing an omegalib module
- added new OpenGL3 GPU API (GpuProgram, GpuBuffer, Uniform, etc.)
- added support for using external Python distributions on Windows
<strong>Fixes</strong>
- fixed module dependency solver
- fixed cmake files for including omegalib into external applications
- several fixes to OpenGL core profile support
- improved packaging scripts, including support for packaging installers on OSX
Full changelog: https://github.com/uic-evl/omegalib/compare/v13.1...v15.0
2019 IEEE 9th Symposium on Large Data Analysis and Visualization (LDAV), 2019
When dealing with extremely large data sets or computationally expensive rendering pipelines, local workstations may not be able to render the full data set or maintain interactive frame rates. In these cases, high-performance graphics clusters can be leveraged for distributed rendering. However, this traditionally has removed real-time feedback from the visualization system. In order to harness the power of distributed rendering and the real-time nature of local rendering, we developed PxStream — a streaming framework to transfer dynamically rendered images from high-performance graphics clusters to remote machines in real time. PxStream clients can range from a standard computer with a single monitor to a cluster-driven tiled display wall. Additionally, the PxStream server supports multiple concurrent endpoints to allow collaborators at different physical locations to simultaneously view the image stream. Initial tests demonstrate that PxStream can simultaneously stream 66 megapixel images to two locations at nearly 50 frames per second. Index Terms: Human-centered computing—Visualization; Human-centered computing—Collaborative and social computing; Theory of computation—Distributed algorithms.
2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), 2018
Extreme scale analytics often requires distributed memory algorithms in order to process the volume of data output by high performance simulations. Traditionally, these analysis routines post-process data saved to disk after a simulation has completed. However, concurrently executing both simulation and analysis can yield great benefits – reduced or eliminated disk I/O, increased output frequency to improve fidelity, and ultimately a shorter time-to-discovery. One such method for concurrent simulation and analysis is in transit – transferring data from the resource running the simulation to a separate resource running the analysis. In transit analysis can be beneficial because compute resources may lack hardware needed for analysis (e.g. GPUs), and because it reduces the impact of analysis tasks on the run time of the simulation. The work described in this paper compares three techniques for transferring data between distributed memory applications: 1) writing data to and reading data from a parallel file system, 2) copying data into and out of a network-accessed shared memory pool, and 3) streaming data in parallel from the processes in the simulation application to the processes in the analysis application. Our results show that using a shared memory pool and streaming data over high-bandwidth networks can both drastically increase I/O speeds and lead to quicker analysis.
2017 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), 2017
High-performance distributed memory applications often load or receive data in a format that differs from what the application uses. One such difference arises from how the application distributes data for parallel processing. Data must be redistributed from how it was laid out by the producer to how the application needs the data to be laid out amongst its processes. In this paper, we present a large-scale distributed memory library, provided to developers as an easily integrated API, for automating data redistribution in MPI-enabled applications. We then present the results of two scientific computing use cases to evaluate our library. The first use case highlights how dynamic data redistribution can greatly reduce load time when reading three-dimensional medical imaging data from disk. The second use case highlights how dynamic data redistribution can facilitate in-transit analysis of computational fluid dynamics, which results in smaller data output size and faster time-to-discovery.
He also provided inspiration and support in my studies to help discover my true passion in Computer Science. A special thanks goes to Dr. Luc Renambot, with whom I collaborated closely on both research and development. I would also like to thank the rest of my dissertation committee for providing insight and feedback from the time I was developing a research question until the time it culminated in analyzing the results of evaluation studies. Additionally, I must give a big thank you to the entire Electronic Visualization Laboratory. The students, faculty, and staff have been incredibly supportive. I feel lucky to have worked with each one of you, and I will always remain a member of the EVL family. TM
CONTRIBUTION OF AUTHORS
Chapter 1 introduces an active area of research to which my dissertation makes its contributions. Chapter 2 provides background about the technologies I used during my research, and includes portions of a published manuscript (Marrinan et al., SAGE2: A New Approach for Data Intensive Collaboration Using Scalable Resolution Shared Displays, 2014) for which I was the primary author. Chapter 3 is a literature review of related works that frame my dissertation within the scope of research in the field of computer-supported cooperative work. Chapter 4 details my research methods for the implementation and evaluation of a new technology, as well as outlines the main questions my dissertation sought to answer. Chapter 5 presents the results of user studies I conducted and provides insight into answering my research questions. Chapter 6 concludes my thesis by summarizing the knowledge that was gained and providing future areas to expand upon my research. My advisor and chair, Andrew Johnson, along with the rest of my committee, contributed valuable feedback during the editing of this document.
PREFACE
As a graduate student at the Electronic Visualization Laboratory, my research interests include visualization, human-computer interaction, and computer-supported cooperative work. The main research topic I aimed to contribute toward is how to improve collaboration across distance, especially when the technology is heterogeneous at various locations. Video and teleconferencing systems have been widely adopted in research and industry for communication between individuals or groups at various locations. The next generation of communication systems will enable real-time data-conferencing, where applications and their respective data are shared along with audio and video. Although some data-conferencing abilities have started being integrated into existing software, they are generally still in the early stages and do not allow for real-time collaboration on unrestricted data types.
Proceedings of the 2016 ACM International Conference on Interactive Surfaces and Spaces, 2016
Classic visual analysis relies on a single medium for displaying and interacting with data. Large-scale tiled display walls, virtual reality using head-mounted displays or CAVE systems, and collaborative touch screens have all been utilized for data exploration and analysis. We present our initial findings of combining numerous display environments and input modalities to create an interactive multi-modal display space that enables researchers to leverage the pieces of technology that best suit specific sub-tasks. Our main contributions are 1) the deployment of an input server that interfaces with a wide array of interaction devices to create a single uniform stream of data usable by custom visual applications, and 2) three real-world use cases of leveraging multiple display environments in conjunction with one another to enhance scientific discovery and data dissemination.
Proceedings of the 6th ACM International Symposium on Pervasive Displays, 2017
Scheduling conferences is a common task in both research and industry, which requires relatively small groups to collaborate and negotiate in order to solve an often-large logistical problem with many nuances. For large conferences, the process can take days and it is traditionally a manual procedure performed using physical tools such as whiteboards and sticky notes. We present the design and implementation of StickySchedule, a multiuser application for use on interactive large-scale shared displays to better enable groups to organize large conference-scheduling data. To evaluate our tool, we present observations from novice users, and authentic use cases with expert feedback from organizers who are heavily involved in large conference scheduling. The main contributions of our work are documenting the collaborative and competitive aspects of conference scheduling, creating a tool that incorporates successful features and addresses identified issues with prior works, and verifying the usefulness of our tool by observing and discussing a variety of use cases, in both collocated and remote-distributed settings.
Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, 2017
Mixed presence collaboration involves remote collaboration between multiple collocated groups. This paper presents the design and results of a user study that focused on mixed presence collaboration using large-scale tiled display walls. The research was conducted in order to compare data synchronization schemes for multiuser visualization applications. Our study compared three techniques for sharing data between display spaces with varying constraints and affordances. The results provide empirical evidence that using data sharing techniques with continuous synchronization between the sites leads to improved collaboration for a search and analysis task between remotely located groups. We have also identified aspects of synchronized sessions that result in increased remote collaborator awareness and parallel task coordination. It is believed that this research will lead to better utilization of large-scale tiled display walls for distributed group work.
Future Generation Computer Systems, 2016
In this paper, we present SAGE2, a software framework that enables local and remote collaboration on Scalable Resolution Display Environments (SRDE). An SRDE can be any configuration of displays, ranging from a single monitor to a wall of tiled flat-panel displays. SAGE2 creates a seamless ultra-high-resolution desktop across the SRDE. Users can wirelessly connect to the SRDE with their own devices in order to interact with the system. Many users can simultaneously utilize a drag-and-drop interface to transfer local documents and show them on the SRDE, use a mouse pointer and keyboard to interact with existing content on the SRDE, and share their screen so that it is viewable to all. SAGE2 can be used in many configurations and is able to support many communities working with various types of media and high-resolution content, from research meetings to creative sessions to education. SAGE2 is browser-based, utilizing a web server to host content, WebSockets for message passing, and HTML with JavaScript for rendering and interaction. Recent web developments, with the emergence of HTML5, have allowed browsers to use advanced rendering techniques without requiring plug-ins (canvas drawing, WebGL 3D rendering, native video player, etc.). One major benefit of browser-based software is that there are no installation requirements for users and it is inherently cross-platform. A user simply needs a web browser on the device he/she wishes to use as an interaction tool for the SRDE. This considerably lowers the barrier to entry for engaging in meaningful collaboration sessions.
Computer Aided Chemical Engineering, 2012
In this paper we present a novel technique for 3D micro capillary bed model reconstruction and computational fluid dynamics (CFD) calculation to simulate morphological and blood perfusion parameters. Major arterial and venous cerebral blood vessels were reconstructed from scanning electron microscope (SEM) images, and vessels whose diameters are beyond the resolution of modern imaging techniques were grown from this base structure using our novel directed interactive growth algorithm (DIGA). 3D Voronoi networks were used to represent the microvasculature capillary network that joins arterial vessels to adjacent draining veins. The resulting network is morphologically accurate to in vivo measurements of the functional unit, with accurate measurements of vessel density (3.17%) and surface area to tissue volume ratio (5.84%). Perfusion patterns of supply to the functional unit and systemic pressure drops match those expected in living tissue and indicate the model is a good candidate for exploring the hemodynamic phenomenon of autoregulation.
Large-scale scientific simulations typically output massive amounts of data that must be later re... more Large-scale scientific simulations typically output massive amounts of data that must be later read in for post-hoc visualization and analysis. With codes simulating complex phenomena at ever-increasing fidelity, writing data to disk during this traditional high-performance computing workflow has become a significant bottleneck. In situ workflows offer a solution to this bottleneck, whereby data is simultaneously produced and analyzed without involving disk storage. In situ analysis can increase efficiency for domain scientists who are exploring a data set or fine-tuning visualization and analysis parameters. Our work seeks to enable researchers to easily create and interactively analyze large-scale simulations through the use of Jupyter Notebooks without requiring application developers to explicitly integrate in situ libraries.
When dealing with extremely large data sets or computationally expensive rendering pipelines, loc... more When dealing with extremely large data sets or computationally expensive rendering pipelines, local workstations may not be able to render the full data set or maintain interactive frame rates. In these cases, high-performance graphics clusters can be leveraged for distributed rendering. However, this traditionally has removed real-time feedback from the visualization system. In order to harness the power of distributed rendering and the real-time nature of local rendering, we developed PxStream — a streaming framework to transfer dynamically rendered images from high-performance graphics clusters to remote machines in real-time. PxStream clients can range from a standard computer with single monitor to a cluster-driven tiled display wall. Additionally, the PxStream server supports multiple concurrent endpoints to allow collaborators at different physical locations to simultaneously view the image stream. Initial tests demonstrate that PxStream can simultaneously stream 66 megapixel images to two locations at nearly 50 frames per second. Index Terms: Human-centered computing—Visualization; Human-centered computing—Collaborative and social computing; Theory of computation—Distributed algorithms.
Journal of Computational Science, Feb 1, 2019
Performing analysis or generating visualizations concurrently with high performance simulations c... more Performing analysis or generating visualizations concurrently with high performance simulations can yield great benefits compared to post-processing data. Writing and reading large volumes of data can be reduced or eliminated, thereby producing an I/O cost savings. One such method for concurrent simulation and analysis is in transit-streaming data from the resource running the simulation to a separate resource running the analysis. In transit analysis can be beneficial since computational resources may not have certain resources needed for visualization and analysis (e.g. GPUs) and to reduce the impact of performing analysis tasks to the run time of the simulation. When sending and receiving data in transit, data redistribution mechanisms are needed in order to support heterogeneous data layouts that may be required by the simulation and analysis applications. The work described in this paper compares two mechanisms for on-the-fly data redistribution when streaming data in parallel between two distributed memory applications. Our results show that it is often more advantageous to stream data in the same layout as the sender and redistribute data amongst processes on the receiving end than to stream data in the final layout needed by the receiver.
IEEE Transactions on Visualization and Computer Graphics, May 1, 2021
Surround-view panoramic images and videos have become a popular form of media for interactive vie... more Surround-view panoramic images and videos have become a popular form of media for interactive viewing on mobile devices and virtual reality headsets. Viewing such media provides a sense of immersion by allowing users to control their view direction and experience an entire environment. When using a virtual reality headset, the level of immersion can be improved by leveraging stereoscopic capabilities. Stereoscopic images are generated in pairs, one for the left eye and one for the right eye, and result in providing an important depth cue for the human visual system. For computer generated imagery, rendering proper stereo pairs is well known for a fixed view. However, it is much more difficult to create omnidirectional stereo pairs for a surround-view projection that work well when looking in any direction. One major drawback of traditional omnidirectional stereo images is that they suffer from binocular misalignment in the peripheral vision as a user's view direction approaches the zenith / nadir (north / south pole) of the projection sphere. This paper presents a real-time geometry-based approach for omnidirectional stereo rendering that fits into the standard rendering pipeline. Our approach includes tunable parameters that enable pole merging - a reduction in the stereo effect near the poles that can minimize binocular misalignment. Results from a user study indicate that pole merging reduces visual fatigue and discomfort associated with binocular misalignment without inhibiting depth perception.
Proceedings of the 2016 ACM International Conference on Interactive Surfaces and Spaces
Real world group-to-group collaboration often occurs between partially distributed interdisciplin... more Real world group-to-group collaboration often occurs between partially distributed interdisciplinary teams, with each discipline working in a unique environment suited for its needs. Groupware must be flexible so that it can be incorporated into a variety of workspaces in order to successfully facilitate this type of mixed presence collaboration. We have developed two new techniques for sharing and synchronizing multiuser applications between heterogeneous large-scale shared displays. The first new technique partitions displays into a perfectly mirrored public space and a local private space. The second new technique enables usercontrolled partial synchronization, where different attributes of an application can be synchronized or controlled independently. This paper presents two main contributions of our work: 1) identifying deficiencies in current groupware for interacting with data during mixed presence collaboration, and 2) developing two multiuser data synchronization techniques to address these deficiencies and extend current collaborative infrastructure for large-scale shared displays.
2019 IEEE 9th Symposium on Large Data Analysis and Visualization (LDAV), 2019
Ultra-high-resolution visualizations of large-scale data sets are often rendered using a remotely located graphics cluster that does not have a connected display. In such instances, rendered images must either be streamed over a network for live viewing, or saved to disk for later viewing. This process introduces the additional overhead associated with transferring data off of the GPU device. We present early work on real-time compression of rendered visualizations that aims to reduce both the device-to-host data transfer time and the I/O time for streaming or writing to disk. By using OpenGL / CUDA interop, images are compressed on the GPU prior to transferring the data to main memory. Although there is a computation cost to performing compression, our results show that this overhead is more than offset by the reduced data transfer and I/O times.
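The paper performs compression on the GPU via OpenGL / CUDA interop; the specific codec is not given in the abstract. As a stand-in to illustrate the trade-off being measured, the sketch below uses simple run-length encoding, an assumed scheme chosen because rendered visualizations often contain large uniform regions:

```python
def rle_encode(pixels):
    """Run-length encode a flat sequence of pixel values into
    (value, count) pairs. Uniform regions (e.g. background) collapse
    to a single pair, shrinking the payload before transfer."""
    if not pixels:
        return []
    runs = []
    current, count = pixels[0], 1
    for p in pixels[1:]:
        if p == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = p, 1
    runs.append((current, count))
    return runs


def rle_decode(runs):
    """Expand (value, count) pairs back into the original pixel sequence."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out


# A scanline that is mostly background compresses extremely well:
scanline = [0] * 900 + [255] * 100
runs = rle_encode(scanline)
# 1000 pixel values are now represented by just len(runs) pairs,
# reducing both device-to-host transfer and I/O volume.
```

Whether the encode cost pays for itself is exactly the question the paper evaluates: the time spent compressing must be smaller than the transfer and I/O time it saves.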
2021 IEEE 11th Symposium on Large Data Analysis and Visualization (LDAV), 2021
Large-scale scientific simulations typically output massive amounts of data that must be later read in for post-hoc visualization and analysis. With codes simulating complex phenomena at ever-increasing fidelity, writing data to disk during this traditional high-performance computing workflow has become a significant bottleneck. In situ workflows offer a solution to this bottleneck, whereby data is simultaneously produced and analyzed without involving disk storage. In situ analysis can increase efficiency for domain scientists who are exploring a data set or fine-tuning visualization and analysis parameters. Our work seeks to enable researchers to easily create and interactively analyze large-scale simulations through the use of Jupyter Notebooks without requiring application developers to explicitly integrate in situ libraries.
Major changes: added support for warping and edge blending (thanks @voidcycles); added support for Xcode (thanks @koosha94); modules no longer need to be registered with a hub repository, and omegalib can now be told to use any GitHub repository containing an omegalib module; added a new OpenGL3 GPU API (GpuProgram, GpuBuffer, Uniform, etc.); added support for using external Python distributions on Windows. Fixes: fixed the module dependency solver; fixed CMake files for including omegalib in external applications; several fixes to OpenGL core profile support; improved packaging scripts, including support for packaging installers on OS X. Full changelog: https://github.com/uic-evl/omegalib/compare/v13.1...v15.0
2019 IEEE 9th Symposium on Large Data Analysis and Visualization (LDAV), 2019
When dealing with extremely large data sets or computationally expensive rendering pipelines, local workstations may not be able to render the full data set or maintain interactive frame rates. In these cases, high-performance graphics clusters can be leveraged for distributed rendering. However, this traditionally has removed real-time feedback from the visualization system. In order to harness the power of distributed rendering and the real-time nature of local rendering, we developed PxStream, a streaming framework to transfer dynamically rendered images from high-performance graphics clusters to remote machines in real-time. PxStream clients can range from a standard computer with a single monitor to a cluster-driven tiled display wall. Additionally, the PxStream server supports multiple concurrent endpoints to allow collaborators at different physical locations to simultaneously view the image stream. Initial tests demonstrate that PxStream can simultaneously stream 66 megapixel images to two locations at nearly 50 frames per second. Index Terms: Human-centered computing—Visualization; Human-centered computing—Collaborative and social computing; Theory of computation—Distributed algorithms.
2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), 2018
Extreme scale analytics often requires distributed memory algorithms in order to process the volume of data output by high performance simulations. Traditionally, these analysis routines post-process data saved to disk after a simulation has completed. However, concurrently executing both simulation and analysis can yield great benefits: reducing or eliminating disk I/O, increasing output frequency to improve fidelity, and ultimately shortening time-to-discovery. One such method for concurrent simulation and analysis is in transit analysis: transferring data from the resource running the simulation to a separate resource running the analysis. In transit analysis can be beneficial since compute resources may lack hardware needed for analysis (e.g. GPUs), and it reduces the impact of performing analysis tasks on the run time of the simulation. The work described in this paper compares three techniques for transferring data between distributed memory applications: 1) writing data to and reading data from a parallel file system, 2) copying data into and out of a network-accessed shared memory pool, and 3) streaming data in parallel from the processes in the simulation application to the processes in the analysis application. Our results show that using a shared memory pool and streaming data over high-bandwidth networks can both drastically increase I/O speeds and lead to quicker analysis.
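The third technique, streaming directly from simulation processes to analysis processes, can be sketched with a minimal length-prefixed wire format. This is an illustrative single-machine toy using a socket pair, not the paper's actual protocol or benchmark; the header layout is assumed:

```python
import socket
import struct
import threading


def send_array(sock, values):
    """Stream a list of floats: 8-byte length header, then packed payload."""
    payload = struct.pack(f"{len(values)}d", *values)
    sock.sendall(struct.pack("Q", len(values)) + payload)


def recv_exact(sock, nbytes):
    """Read exactly nbytes from the socket (recv may return short reads)."""
    data = b""
    while len(data) < nbytes:
        chunk = sock.recv(nbytes - len(data))
        if not chunk:
            raise ConnectionError("stream closed early")
        data += chunk
    return data


def recv_array(sock):
    """Receive one length-prefixed array of floats."""
    (n,) = struct.unpack("Q", recv_exact(sock, 8))
    return list(struct.unpack(f"{n}d", recv_exact(sock, 8 * n)))


# Simulation and analysis ends of an in-transit stream (single-machine demo):
sim_end, ana_end = socket.socketpair()
timestep = [0.1 * i for i in range(1000)]
sender = threading.Thread(target=send_array, args=(sim_end, timestep))
sender.start()
received = recv_array(ana_end)   # data reaches analysis without touching disk
sender.join()
```

The contrast with technique 1 is the absence of any file system in the path: the payload moves from producer memory to consumer memory, which is where the reported I/O speedups come from.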
2017 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), 2017
High-performance distributed memory applications often load or receive data in a format that differs from what the application uses. One such difference arises from how the application distributes data for parallel processing. Data must be redistributed from how it was laid out by the producer to how the application needs the data to be laid out amongst its processes. In this paper, we present a large-scale distributed memory library, provided to developers in an easily integrated API, for automating data redistribution in MPI enabled applications. We then present the results of two scientific computing use cases to evaluate our library. The first use case highlights how dynamic data redistribution can greatly reduce load time when reading three-dimensional medical imaging data from disk. The second use case highlights how dynamic data redistribution can facilitate in-transit analysis of computational fluid dynamics, which results in smaller data output size and faster time-to-discovery.
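The library's API is not reproduced in the abstract, but the core bookkeeping behind automated redistribution can be sketched as computing, for a 1-D block decomposition, which index range each producer rank must send to each consumer rank. The sketch below is an assumed simplification (real data is multi-dimensional and exchanged via MPI):

```python
def block_intervals(total, nprocs):
    """Contiguous 1-D block decomposition: the [start, end) index
    range owned by each rank, with remainders spread over low ranks."""
    base, extra = divmod(total, nprocs)
    intervals, start = [], 0
    for rank in range(nprocs):
        size = base + (1 if rank < extra else 0)
        intervals.append((start, start + size))
        start += size
    return intervals


def redistribution_plan(total, nprocs_src, nprocs_dst):
    """For each (src, dst) rank pair, the overlapping index range that
    src must send to dst when moving data between two decompositions."""
    src = block_intervals(total, nprocs_src)
    dst = block_intervals(total, nprocs_dst)
    plan = []
    for i, (s0, s1) in enumerate(src):
        for j, (d0, d1) in enumerate(dst):
            lo, hi = max(s0, d0), min(s1, d1)
            if lo < hi:  # non-empty overlap: src rank i sends [lo, hi) to dst rank j
                plan.append((i, j, lo, hi))
    return plan


# Redistribute 10 elements from 2 producer ranks to 3 consumer ranks:
plan = redistribution_plan(10, 2, 3)
```

Every element appears in exactly one plan entry, so the plan can drive a point-to-point exchange (e.g. MPI_Alltoallv-style) without any rank needing global knowledge of the data.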
He also provided inspiration and support in my studies to help discover my true passion in Computer Science. A special thanks goes to Dr. Luc Renambot, with whom I collaborated closely on both research and development. I would also like to thank the rest of my dissertation committee for providing insight and feedback from the time I was developing a research question until the time it culminated in analyzing the results of evaluation studies. Additionally, I must give a big thank you to the entire Electronic Visualization Laboratory. The students, faculty, and staff have been incredibly supportive. I feel lucky to have worked with each one of you, and I will always remain a member of the EVL family. CONTRIBUTION OF AUTHORS: Chapter 1 introduces an active area of research to which my dissertation makes its contributions. Chapter 2 provides background on the technologies I used during my research, and includes portions of a published manuscript (Marrinan et al., SAGE2: A New Approach for Data Intensive Collaboration Using Scalable Resolution Shared Displays, 2014) for which I was the primary author. Chapter 3 is a literature review of related works that frame my dissertation within the scope of research in the field of computer-supported cooperative work. Chapter 4 details my research methods for implementation and evaluation of a new technology, and outlines the main questions my dissertation sought to answer. Chapter 5 presents the results of user studies I conducted and provides insight into answering my research questions. Chapter 6 concludes my thesis by summarizing the knowledge that was gained and identifying future areas in which to expand upon my research. My advisor and chair, Andrew Johnson, along with the rest of my committee, contributed valuable feedback during the editing of this document.
PREFACE: As a graduate student at the Electronic Visualization Laboratory, my research interests include visualization, human-computer interaction, and computer-supported cooperative work. The main research topic I aimed to contribute toward is how to improve collaboration across distance, especially when the technology is heterogeneous at the various locations. Video and teleconferencing systems have been widely adopted in research and industry for communication between individuals or groups at various locations. The next generation of communication systems will enable real-time data-conferencing, where applications and their respective data are shared along with audio and video. Although some data-conferencing abilities have started to be integrated into existing software, they are generally still in the early stages and do not allow for real-time collaboration on unrestricted data types.
Proceedings of the 2016 ACM International Conference on Interactive Surfaces and Spaces, 2016
Classic visual analysis relies on a single medium for displaying and interacting with data. Large-scale tiled display walls, virtual reality using head-mounted displays or CAVE systems, and collaborative touch screens have all been utilized for data exploration and analysis. We present our initial findings of combining numerous display environments and input modalities to create an interactive multi-modal display space that enables researchers to leverage various pieces of technology that will best suit specific sub-tasks. Our main contributions are 1) the deployment of an input server that interfaces with a wide array of interaction devices to create a single uniform stream of data usable by custom visual applications, and 2) three real-world use cases of leveraging multiple display environments in conjunction with one another to enhance scientific discovery and data dissemination.
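The first contribution, an input server that unifies many interaction devices into one stream, can be pictured as a normalization layer. The event schema and device names below are assumed for illustration; the actual system's format is not given in the abstract:

```python
def normalize_event(device, raw):
    """Translate a device-specific input event into one uniform schema,
    so visual applications can consume any device the same way
    (hypothetical schema and device names)."""
    if device == "touch":
        return {"source": device, "kind": "pointer",
                "x": raw["x"], "y": raw["y"], "action": raw["gesture"]}
    if device == "mouse":
        return {"source": device, "kind": "pointer",
                "x": raw["px"], "y": raw["py"], "action": raw["button"]}
    if device == "wand":
        # 6-DOF tracker: project position to 2-D for pointer-style applications
        return {"source": device, "kind": "pointer",
                "x": raw["pos"][0], "y": raw["pos"][1], "action": raw["trigger"]}
    raise ValueError(f"unknown device: {device}")


events = [
    normalize_event("touch", {"x": 0.2, "y": 0.7, "gesture": "tap"}),
    normalize_event("mouse", {"px": 0.5, "py": 0.5, "button": "left"}),
    normalize_event("wand", {"pos": (0.8, 0.1, 2.0), "trigger": "press"}),
]
# Every event now shares the same keys, regardless of originating device.
```

The payoff is that a visual application written against the uniform schema runs unchanged whether it is driven from a touch wall, a desktop, or a tracked VR controller.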
Proceedings of the 6th ACM International Symposium on Pervasive Displays, 2017
Scheduling conferences is a common task in both research and industry, which requires relatively small groups to collaborate and negotiate in order to solve an often-large logistical problem with many nuances. For large conferences, the process can take days and it is traditionally a manual procedure performed using physical tools such as whiteboards and sticky notes. We present the design and implementation of StickySchedule, a multiuser application for use on interactive large-scale shared displays to better enable groups to organize large conference-scheduling data. To evaluate our tool, we present observations from novice users, and authentic use cases with expert feedback from organizers who are heavily involved in large conference scheduling. The main contributions of our work are documenting the collaborative and competitive aspects of conference scheduling, creating a tool that incorporates successful features and addresses identified issues with prior works, and verifying the usefulness of our tool by observing and discussing a variety of use cases, in both collocated and remote-distributed settings.
Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, 2017
Mixed presence collaboration involves remote collaboration between multiple collocated groups. This paper presents the design and results of a user study that focused on mixed presence collaboration using large-scale tiled display walls. The research was conducted in order to compare data synchronization schemes for multiuser visualization applications. Our study compared three techniques for sharing data between display spaces with varying constraints and affordances. The results provide empirical evidence that using data sharing techniques with continuous synchronization between the sites leads to improved collaboration for a search and analysis task between remotely located groups. We have also identified aspects of synchronized sessions that result in increased remote collaborator awareness and parallel task coordination. It is believed that this research will lead to better utilization of large-scale tiled display walls for distributed group work.
IEEE Transactions on Visualization and Computer Graphics, 2021
Surround-view panoramic images and videos have become a popular form of media for interactive viewing on mobile devices and virtual reality headsets. Viewing such media provides a sense of immersion by allowing users to control their view direction and experience an entire environment. When using a virtual reality headset, the level of immersion can be improved by leveraging stereoscopic capabilities. Stereoscopic images are generated in pairs, one for the left eye and one for the right eye, and result in providing an important depth cue for the human visual system. For computer generated imagery, rendering proper stereo pairs is well known for a fixed view. However, it is much more difficult to create omnidirectional stereo pairs for a surround-view projection that work well when looking in any direction. One major drawback of traditional omnidirectional stereo images is that they suffer from binocular misalignment in the peripheral vision as a user's view direction approaches the zenith / nadir (north / south pole) of the projection sphere. This paper presents a real-time geometry-based approach for omnidirectional stereo rendering that fits into the standard rendering pipeline. Our approach includes tunable parameters that enable pole merging: a reduction in the stereo effect near the poles that can minimize binocular misalignment. Results from a user study indicate that pole merging reduces visual fatigue and discomfort associated with binocular misalignment without inhibiting depth perception.
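The essence of pole merging is attenuating the interocular offset as the view direction approaches a pole, so the left- and right-eye rays converge where misalignment would otherwise occur. The sketch below is a minimal illustration; the falloff curve and the latitude at which attenuation begins are assumed stand-ins for the paper's tunable parameters:

```python
import math


def merged_eye_offset(latitude, max_offset, merge_start=math.radians(60)):
    """Scale the per-eye half-offset toward zero as the view direction
    approaches a pole (pole merging). `latitude` is in radians;
    `merge_start` is the latitude where attenuation begins, a tunable
    parameter assumed here for illustration."""
    pole = math.pi / 2
    lat = abs(latitude)          # symmetric treatment of zenith and nadir
    if lat <= merge_start:
        return max_offset        # full stereo effect away from the poles
    t = (lat - merge_start) / (pole - merge_start)
    s = t * t * (3 - 2 * t)      # smoothstep falloff: no visible seam
    return max_offset * (1.0 - s)


# Half of a ~64 mm interpupillary distance, in meters:
half_ipd = 0.032
# Full stereo at the equator, fully merged (monoscopic) at the pole:
equator = merged_eye_offset(0.0, half_ipd)
zenith = merged_eye_offset(math.pi / 2, half_ipd)
```

Because the offset reaches exactly zero at the poles, both eyes see an identical image there, which is what removes the binocular misalignment while leaving stereo depth cues intact at lower latitudes.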
Journal of Computational Science, 2019
Performing analysis or generating visualizations concurrently with high performance simulations can yield great benefits compared to post-processing data. Writing and reading large volumes of data can be reduced or eliminated, thereby producing an I/O cost savings. One such method for concurrent simulation and analysis is in transit analysis: streaming data from the resource running the simulation to a separate resource running the analysis. In transit analysis can be beneficial since computational resources may lack certain hardware needed for visualization and analysis (e.g. GPUs), and it reduces the impact of performing analysis tasks on the run time of the simulation. When sending and receiving data in transit, data redistribution mechanisms are needed in order to support heterogeneous data layouts that may be required by the simulation and analysis applications. The work described in this paper compares two mechanisms for on-the-fly data redistribution when streaming data in parallel between two distributed memory applications. Our results show that it is often more advantageous to stream data in the same layout as the sender and redistribute data amongst processes on the receiving end than to stream data in the final layout needed by the receiver.
Future Generation Computer Systems, 2016
In this paper, we present SAGE2, a software framework that enables local and remote collaboration on Scalable Resolution Display Environments (SRDE). An SRDE can be any configuration of displays, ranging from a single monitor to a wall of tiled flat-panel displays. SAGE2 creates a seamless ultra-high resolution desktop across the SRDE. Users can wirelessly connect to the SRDE with their own devices in order to interact with the system. Many users can simultaneously utilize a drag-and-drop interface to transfer local documents and show them on the SRDE, use a mouse pointer and keyboard to interact with existing content that is on the SRDE, and share their screen so that it is viewable to all. SAGE2 can be used in many configurations and is able to support many communities working with various types of media and high-resolution content, from research meetings to creative sessions to education. SAGE2 is browser-based, utilizing a web server to host content, WebSockets for message passing, and HTML with JavaScript for rendering and interaction. Recent web developments, with the emergence of HTML5, have allowed browsers to use advanced rendering techniques without requiring plug-ins (canvas drawing, WebGL 3D rendering, native video player, etc.). One major benefit of browser-based software is that there are no installation requirements for users and it is inherently cross-platform. A user simply needs a web browser on the device he/she wishes to use as an interaction tool for the SRDE. This considerably lowers the barrier to entry for engaging in meaningful collaboration sessions.
Computer Aided Chemical Engineering, 2012
In this paper we present a novel technique for 3D micro capillary bed model reconstruction and computational fluid-dynamics (CFD) calculation to simulate morphological and blood perfusion parameters. Major arterial and venous cerebral blood vessels were reconstructed from scanning electron microscope (SEM) images, and vessels whose diameters are beyond the resolution of modern imaging techniques were grown from this base structure using our novel directed interactive growth algorithm (DIGA). 3D Voronoi networks were used to represent the microvasculature capillary network that joins arterial vessels to adjacent draining veins. The resulting network is morphologically accurate to in vivo measurements of the functional unit, with accurate measurements of vessel density (3.17%) and surface area to tissue volume ratio (5.84%). Perfusion patterns of supply to the functional unit and systemic pressure drops match those expected in living tissue and indicate the model is a good candidate for exploring the hemodynamic phenomenon of autoregulation.
CHI’20, Honolulu, HI, USA, 2020
Head mounted displays (HMDs) can provide users with an immersive virtual reality (VR) experience, but often are limited to viewing a single environment / data set at a time. In this position paper, we argue that co-located users in the real world can help provide additional context and steer virtual experiences. With the use of a separate canvas, such as a large-scale display wall, non-immersed users can view a wealth of contextual information. This information can be used to drive the VR user's interactions and lead to deeper understanding. We will highlight two digital humanities use cases that capture real locations using a 360° camera: 1) urban art and 2) urban community gardens. In both cases, HMDs allow users to view a space and its surroundings, while non-immersed users can help with tasks such as placing overlays with auxiliary information, navigating between related spaces, and directing the VR user's actions.