Eric Freudenthal - Academia.edu

Papers by Eric Freudenthal

Research paper thumbnail of Work In Progress: Wireless Biomedical Data Collection A Laboratory To Prepare Students For Emerging Engineering Areas

2009 Annual Conference & Exposition Proceedings

The authors present different modules created jointly by the Computer Science and Electrical Engineering programs for a new laboratory with a focus on wireless sensors applied toward biomedical data collection. Students in those programs typically have little exposure to the growing area of biomedical telemetry and control because most of their courses are restricted to classical discipline subjects. To address these motivational and technical needs, we are implementing a course with a hands-on emphasis. The course exposes the students to the needs and the nature of interconnected biomedical systems, and engages them in the development of networked applications for embedded wireless devices. This elective course is being jointly offered by the Electrical Engineering and Computer Science departments beginning in the spring of 2009 and targets upper-division undergraduate and graduate students from both departments. Prerequisites include a course in computer organization and proficiency with a high-level imperative programming language. The planned laboratory modules expose the students to the process of designing a biomedical wireless data collection system in which they are required to apply concepts from several areas. A team of instructors from CS, ECE, and BME backgrounds will provide the foundation of basic concepts required, and the student teams will then collaborate on the final design. The approach attempts to exemplify the type of work that could take place in a real application.

Research paper thumbnail of A Virtualized Network Teaching Laboratory

2009 Annual Conference & Exposition Proceedings

Since for most students, learning dramatically improves with hands-on experience, a good networking lab is an asset for teaching networks. However, building such a lab is usually a challenge. It requires costly equipment and flexible configurations that are often not compatible with the campus network. In this paper, we describe how we designed a network teaching lab based on virtual machines connected on a virtual network. An instructor can create a virtual network and make it available to students. Students can configure the network and run experiments as instructed. When the task is complete, the students can submit the result of their work.

Research paper thumbnail of A Creatively Engaging Introductory Course In Computer Science That Gently Motivates Exploration Of Advanced Mathematical Concepts

We describe reforms to a highly engaging algorithm-centric introductory course in media programming offered to pre-engineering students at the University of Texas at El Paso, an urban Hispanic-serving institution (HSI), as part of a required entering-students program. In order to become eligible to attend the introductory programming course that begins the computer science degree plan at UTEP (“CS-1”), a large fraction of incoming freshmen must attend several semesters of preparatory “pre-calculus” math courses. Most of these students will have limited if any prior exposure to programming or engineering. The initial implementation of our course was intended solely to provide an engaging first experience with programming, and followed Mark Guzdial’s “Media Computation” curriculum. Dr. Guzdial’s curriculum has successfully engaged Liberal Arts students in programming through the creation of aesthetically motivated multimedia projects. Attendees in pre-engineering and pre-professional p...

Research paper thumbnail of A gentle introduction to addressing modes in a first course in computer organization

This paper describes the reform of a sophomore-level course in computer organization for the Computer Science BS curriculum at The University of Texas at El Paso, where Java and integrated development environments (IDEs) have been adopted as the first and primary language and development environment. This effort was motivated by faculty observations and industry feedback indicating that upper-division students and graduates were failing to achieve mastery of non-garbage-collected, strictly imperative languages such as C. The similarity of C variable semantics to the underlying machine model enables simultaneous mastery of both C and assembly language programming and exposes implementation details that are difficult to teach independently, such as subroutine linkage and management of stack frames. An online lab manual has been developed for this course that is freely available for extension or use by other institutions. Our previous papers reported on pedagogical techniques for facilitating student understand...

Research paper thumbnail of How to gauge disruptions caused by garbage collection: Towards an efficient algorithm

Comprehensive garbage collection is employed on a variety of computing devices, including intelligent cell phones. Garbage collection can cause prolonged user-interface pauses. In order to evaluate and compare the disruptiveness of various garbage collection strategies, it is necessary to gauge disruptions caused by garbage collection. In this paper, we describe efficient algorithms for computing metrics useful for this purpose. 1. Formulation of the Problem. Practical problem: need to minimize disruptions caused by garbage collection. In many computer-based systems, including mobile devices, it is necessary to periodically perform garbage collection. This computation can interfere with the progress of interactive programs. Garbage collection can cause intermittent prolonged pauses in gesture-driven user interfaces that can severely reduce their usability; see, e.g., [1, 2, 4]. Need to gauge the quality of different garbage collection strategies. To decrease the resulting nuisance, ...
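The abstract does not spell out the metrics themselves. As an illustration of the kind of quantity such algorithms compute (our own sketch, not the paper's algorithm), the snippet below takes a trace of GC pause intervals and reports the worst fraction of any fixed-length time window spent paused, a simple proxy for how disruptive the collector is to an interactive user.

```python
# Illustrative sketch, not the algorithm from the paper: given a trace of GC
# pause intervals, estimate the largest fraction of any window of length
# `window` seconds that is spent paused.

def pause_in_window(pauses, w_start, w_end):
    """Total paused time overlapping the window [w_start, w_end]."""
    total = 0.0
    for s, e in pauses:
        overlap = min(e, w_end) - max(s, w_start)
        if overlap > 0:
            total += overlap
    return total

def worst_pause_fraction(pauses, window):
    """Only windows whose left edge sits at a pause start, or whose right edge
    sits at a pause end, need to be checked: any other window can be shifted
    to one of these alignments without losing paused time."""
    starts = [s for s, _ in pauses] + [e - window for _, e in pauses]
    worst = 0.0
    for w_start in starts:
        worst = max(worst, pause_in_window(pauses, w_start, w_start + window) / window)
    return worst

# Example trace of three pauses (seconds); disruption within any 1-second window.
trace = [(0.10, 0.18), (0.90, 1.05), (1.60, 1.75)]
print(worst_pause_fraction(trace, window=1.0))  # -> 0.3 (300 ms paused in some window)
```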

Research paper thumbnail of Work In Progress: Adoption Of Ccs0 Computational Methods And Circuit Analysis Techniques Into An Introductory Programming Course For Electrical Engineers

We report on the content and early evaluation of a pilot for a revised introductory programming course for ECE students titled "Software Design I, modified" (SDIm). SDIm incorporates pedagogical components from a course developed by our computer science department (CCS0) combined with an introduction to electric circuits and other ECE topics. SDIm is being developed in response to observations from several ECE faculty that many students who attended the previously offered courses in introductory C programming and in computer organization had struggled with minor programming assignments throughout the ECE curriculum. They also reported that fewer than 20% of students demonstrated mastery of programming in later senior courses. The CCS0 course employs a simple interpreted programming environment based on Python. It uses small programs associated with mathematical and physical applications in order to illustrate programming concepts and techniques. This intervention is based on the hypothesis that students will more quickly learn the fundamentals of programming using CCS0's pedagogical model and programming environment than with a conventional course in C, and that they will effectively transfer these understandings to the study of C during the second half of the same course. Furthermore, SDIm's inclusion of projects that examine the dynamic behavior of simple RLC circuits will reinforce key concepts taught in foundational ECE courses.
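To make the circuit-simulation projects concrete, here is a minimal sketch of the kind of exercise the abstract suggests, written as our own illustration (not actual course material, and with assumed component values): a forward-Euler simulation of the step response of a series RLC circuit in plain Python.

```python
# Minimal illustration (not actual course material): step response of a series
# RLC circuit via forward-Euler integration.
# State: charge q on the capacitor, current i in the loop.
# Equations: dq/dt = i,   L * di/dt = V - R*i - q/C

R, L, C, V = 100.0, 0.5, 1e-4, 5.0     # ohms, henries, farads, volts (assumed values)
dt, steps = 1e-5, 5000                 # 10 microsecond steps, 50 ms of simulated time

q, i = 0.0, 0.0
for n in range(steps):
    dq = i * dt
    di = (V - R * i - q / C) / L * dt
    q, i = q + dq, i + di
    if n % 1000 == 0:
        print(f"t={n*dt:.3f} s  capacitor voltage={q / C:.3f} V  current={i*1000:.2f} mA")
```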

Research paper thumbnail of Integration Of C Into An Introductory Course In Machine Organization

2008 Annual Conference & Exposition Proceedings

We describe the reform of a fourth-semester course in computer organization in the Computer Science BS curriculum at the University of Texas at El Paso (UTEP), an urban minority-serving institution, where Java and integrated development environments (IDEs) have been adopted as the language and development environment used in the first three semesters of major coursework. This project was motivated by faculty observations at UTEP and elsewhere [1] and by industry feedback indicating that upper-division students and graduates were achieving reduced mastery of imperative languages with explicit memory management (most notably C), scriptable command-line interfaces, and the functions of compilers, assemblers, and linkers.

Research paper thumbnail of Using Programming to Strengthen Mathematics Learning in 9th Grade Algebra Classes

2013 ASEE Annual Conference & Exposition Proceedings

Dr. Lim's research interests are in students' problem-solving dispositions and instructional strategies to advance their ways of thinking. Dr. Lim is particularly interested in impulsive disposition, students' propensity to act out the first thing that comes to mind. Dr. Lim's research goal centers on helping students advance from an impulsive disposition to an analytic disposition. Dr. Lim and colleagues are currently developing, testing, and refining a survey instrument to assess students' impulsive-analytic disposition. They have been investigating instructional strategies, such as the use of prediction items and classroom voting with clicker technology, to help students become aware of their impulsivity and to elicit and address mathematical misconceptions. Dr. Lim is also exploring the use of mathematical tasks to provoke students' intellectual need for the concepts they are expected to learn. Lately, Dr. Lim has been involved in the iMPaCT-Math project to investigate the use of programming activities to foster student learning of foundational algebraic concepts.

Research paper thumbnail of Planting the Seeds of Computational Thinking: An Introduction to Programming Suitable for Inclusion in STEM Curricula

2011 ASEE Annual Conference & Exposition Proceedings

Inadequate math preparation discourages many capable students, especially those from traditionally underrepresented groups, from pursuing or succeeding in STEM academic programs. iMPaCT is a family of “Media-Propelled” courses and course enrichment activities that introduce students to “Computational Thinking.” iMPaCT integrates exploration of math and programmed computation by engaging students in the design and modification of tiny programs that render raster graphics and simulate familiar kinematics. Through these exercises, students gain experience and confidence with foundational math concepts necessary for success in STEM studies, and an understanding of programmed computation. This paper presents early results from our formal evaluation of semester-length iMPaCT courses indicating improved academic success in concurrently and subsequently attended math courses. The results also indicate changes to the nature of student engagement with problem solving using mathematics. This paper also describes iMPaCT-STEM, a nascent effort of computer science and mathematics faculty to distill iMPaCT's pedagogy into sequences of short learning activities designed to teach and reinforce a variety of mathematical and kinematic concepts that can be directly integrated into math and science courses.
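To make concrete what a "tiny program that renders raster graphics and simulates familiar kinematics" can look like, here is a small sketch written in the same spirit (our own illustration, not an actual iMPaCT exercise): projectile motion plotted by setting individual cells of a text raster.

```python
# Illustrative sketch in the spirit of iMPaCT (not an actual course exercise):
# simulate projectile motion and "plot" it by setting pixels in a text raster.

WIDTH, HEIGHT = 60, 20
raster = [[' '] * WIDTH for _ in range(HEIGHT)]

x, y = 0.0, 0.0          # position (m)
vx, vy = 12.0, 18.0      # velocity (m/s)
g, dt = 9.8, 0.05        # gravity (m/s^2), time step (s)

while y >= 0.0 and x < WIDTH:
    col, row = int(x), HEIGHT - 1 - int(y)   # map world coordinates to pixels
    if 0 <= row < HEIGHT:
        raster[row][col] = '*'
    x, y = x + vx * dt, y + vy * dt          # advance position
    vy = vy - g * dt                         # gravity slows, then reverses, the climb

for line in raster:
    print(''.join(line))                     # the familiar parabolic arc appears
```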

Research paper thumbnail of Preliminary Investigation of Mobile System Features Potentially Relevant to HPC

2016 4th International Workshop on Energy Efficient Supercomputing (E2SC), Nov 1, 2016

Energy consumption's increasing importance in scientific computing has driven an interest in developing energy-efficient high-performance systems. The energy constraints of mobile computing have motivated the design and evolution of low-power computing systems capable of supporting a variety of compute-intensive user interfaces and applications. Others have observed the evolution of mobile devices to also provide high performance [14]. Their work has primarily examined the performance and efficiency of compute-intensive scientific programs executed either on mobile systems or on hybrids of mobile CPUs grafted into non-mobile (sometimes HPC) systems [6, 12, 14]. This report describes an investigation of the performance and energy consumption of a single scientific code on five high-performance and mobile systems, with the objective of identifying the performance and energy-efficiency implications of a variety of architectural features. The results of this pilot study suggest that the ISA is less significant than other specific aspects of system architecture in achieving high performance at high efficiency. The strategy employed in this study may be extended to other scientific applications with a variety of memory access, computation, and communication properties.

Research paper thumbnail of Why filtering out higher harmonics makes it easier to carry a tune

Applied Mathematical Sciences, 2019

This article is distributed under the Creative Commons by-nc-nd Attribution License.

Research paper thumbnail of dRBAC: distributed role-based access control for dynamic coalition environments

Proceedings 22nd International Conference on Distributed Computing Systems

Distributed Role-Based Access Control (dRBAC) is a scalable, decentralized trust-management and access-control mechanism for systems that span multiple administrative domains. dRBAC represents controlled actions in terms of roles, which are defined within the trust domain of one entity and can be transitively delegated to other roles within a different trust domain. dRBAC utilizes PKI to identify all entities engaged in trust-sensitive operations and to validate delegation certificates. The mapping of roles to authorized name spaces obviates the need to identify additional policy roots. dRBAC distinguishes itself from previous trust management and role-based access control approaches in its support for three features: (1) third-party delegations, which improve expressiveness by allowing an entity to delegate roles outside its namespace when authorized by an explicit delegation of assignment; (2) valued attributes, which modulate transferred access rights via mechanisms that assign and manipulate numerical values associated with roles; and (3) credential subscriptions, which enable continuous monitoring of established trust relationships using a pub/sub infrastructure to track the status of revocable credentials. This paper describes the dRBAC model, its scalable implementation using a graph-based model of credential discovery and validation, and its application in a larger security context.
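The abstract describes credential discovery as graph-based: delegations form edges between namespaced roles, and a proof of authorization is a chain of valid delegations. The sketch below is our own simplified illustration of that idea, not the dRBAC implementation; the role names and credential format are hypothetical.

```python
# Simplified illustration of graph-based credential-chain discovery
# (our own sketch, not the dRBAC implementation).
from collections import deque

# Each delegation credential says: holders of role `src` may act in role `dst`.
# Role names are namespaced by the issuing entity (hypothetical examples).
delegations = [
    ("HospitalA.Physician", "Coalition.Clinician"),
    ("Coalition.Clinician", "LabB.ResultsReader"),
    ("HospitalA.Nurse", "Coalition.Staff"),
]

def find_chain(start_role, target_role, creds):
    """Return a list of delegations linking start_role to target_role, or None."""
    edges = {}
    for src, dst in creds:
        edges.setdefault(src, []).append(dst)
    queue, parent = deque([start_role]), {start_role: None}
    while queue:
        role = queue.popleft()
        if role == target_role:
            chain, r = [], role
            while parent[r] is not None:      # walk back to the starting role
                chain.append((parent[r], r))
                r = parent[r]
            return list(reversed(chain))
        for nxt in edges.get(role, []):
            if nxt not in parent:
                parent[nxt] = role
                queue.append(nxt)
    return None

print(find_chain("HospitalA.Physician", "LabB.ResultsReader", delegations))
# -> [('HospitalA.Physician', 'Coalition.Clinician'),
#     ('Coalition.Clinician', 'LabB.ResultsReader')]
```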

Research paper thumbnail of VPAF: a flexible framework for establishing and monitoring prolonged authorization relationships

Proceedings of the 5th International ICST Conference on Collaborative Computing: Networking, Applications, Worksharing, 2009

We describe a generic framework for determining and monitoring access rights derived from credential documents. Distributed authorization systems intended to support collaborative coalitions (such as Trust Management systems) typically incorporate mechanisms to both validate credentials and determine authorization. This conjunction of distinct functions increases the complexity of both components and limits overall flexibility. Furthermore, while authorization decisions frequently enable the commencement of a prolonged relationship, current authorization systems are designed to authorize instantaneous transactions and provide no mechanisms to detect and propagate revocation after an authorization decision is made. VPAF (a Validated and Prolonged Authorization Framework) will separate these duties in a manner that permits credential validation and authorization decisions to be managed separately. VPAF is intended to enable vigilant monitoring of prolonged authorization relationships that span mutually distrustful administrative domains, as is common when multiple organizations collaborate.

Research paper thumbnail of Interval Approach to Preserving Privacy in Statistical Databases: Related Challenges and Algorithms of Computational Statistics

Communications in Computer and Information Science

In many practical situations, it is important to store large amounts of data and to be able to statistically process the data. A large part of the data is confidential, so while we welcome statistical data processing, we do not want to reveal sensitive individual data. If we allow researchers to ask all kinds of statistical queries, this can lead to violation of people's privacy. A sure way to avoid these privacy violations is to store ranges of values (e.g., between 40 and 50 for age) instead of the actual values. This idea solves the privacy problem, but it leads to a computational challenge: traditional statistical algorithms need exact data, but now we only know data with interval uncertainty. In this paper, we describe new algorithms designed for processing such interval data.
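As a small worked illustration of the computational issue the abstract raises (our own example, not the paper's algorithms), note that even the sample mean becomes an interval when the data are intervals: its tightest bounds come from averaging the lower endpoints and the upper endpoints separately.

```python
# Worked illustration of statistics over interval data (not the paper's algorithms):
# with values stored only as ranges, the sample mean is itself an interval,
# bounded by the mean of the lower endpoints and the mean of the upper endpoints.

ages = [(40, 50), (20, 30), (30, 40), (60, 70)]   # stored ranges instead of exact ages

lower = sum(lo for lo, _ in ages) / len(ages)
upper = sum(hi for _, hi in ages) / len(ages)
print(f"mean age lies somewhere in [{lower}, {upper}]")   # -> [37.5, 47.5]

# Harder statistics (e.g., the variance) do not decompose this simply: the
# endpoints that minimize or maximize them must be chosen jointly, which is
# exactly what makes interval versions of traditional algorithms challenging.
```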

Research paper thumbnail of Fern: An Updatable Authenticated Dictionary Suitable for Distributed Caching

Communications in Computer and Information Science

Fern is an updatable cryptographically authenticated dictionary developed to propagate identification and authorization information within and among distributed systems. Conventional authenticated dictionaries permit authorization information to be disseminated by untrusted proxies; however, these proxies must maintain full duplicates of the dictionary structure. In contrast, Fern incrementally distributes components of its dictionary as required to satisfy client requests and thus is suitable for deployments where clients are likely to require only a small fraction of a dictionary's contents and connectivity may be limited. When dictionary components must be obtained remotely, the latency of lookup and validation operations is dominated by communication time. This latency can be reduced through the exploitation of locality-sensitive caching of dictionary components. Fern's dictionary components are suitable for caching and distribution via autonomic, scalable, locality-aware Content Distribution Networks (CDNs) and therefore can provide these properties without requiring the provisioning of a dedicated distribution infrastructure. Others have proposed the construction of incrementally distributed authenticated dictionaries that utilize either trees that dynamically re-balance or skiplists. The structural changes that result from tree rebalancing can reduce the effectiveness of caching. Skiplists do not require balancing and thus are more amenable to caching. However, a client lookup from a skiplist-based dictionary must sequentially transfer two to three times as many components as a client of a dictionary based on self-balancing trees. In both cases, these transfers are necessarily serialized, and thus skiplists will incur proportionally increased latency. Fern's dictionary structure utilizes a novel randomized trie that has the desirable characteristics of both of these approaches. While Fern's algorithm is far simpler than that of self-balancing trees, a Fern trie will have similarly short (average and expected worst-case) path lengths, and thus requires that clients obtain approximately the same number of vertices. Furthermore, like skiplists, Fern's trie does not require rebalancing and thus is similarly amenable to caching. A prototype implementation of Fern has been constructed that utilizes the CoralCDN scalable, locality-aware, and autonomic content distribution network. We provide an informal analysis of bandwidth requirements for the Fern authenticated dictionary that agrees with experimental results. We are not aware of other implemented systems with similar properties or of comparable analyses of such systems' performance and bandwidth requirements. Finally, the potential integration of Fern within the CDN on which it relies could yield symbiotic benefits. The indexing infrastructures for autonomic CDNs such as Coral are vulnerable to disruption by malicious participants. Therefore, a CDN's integrity could be guarded against malicious interference through the dissemination of up-to-date authorization information provided by Fern. In a complementary manner, a CDN so fortified by Fern could potentially provide more reliable content distribution service to Fern and thus also improve Fern's availability and performance.
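The core mechanism the abstract relies on is a hash-authenticated dictionary whose vertices can be fetched from untrusted caches and verified against a signed root digest. The sketch below is our own simplified illustration of that verification pattern, not Fern's actual trie structure or wire format: each node's digest covers its entry and its children's digests, so tampering anywhere changes the root digest.

```python
# Simplified illustration of digest-based verification in an authenticated
# dictionary (our own sketch, not Fern's actual structure or wire format).
import hashlib, json

def digest(node):
    """Hash a node's key/value together with the digests of its children."""
    child_digests = {edge: digest(child)
                     for edge, child in sorted(node.get("children", {}).items())}
    payload = json.dumps({"key": node.get("key"), "value": node.get("value"),
                          "children": child_digests}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

# A tiny dictionary published by a trusted source; only the root digest is signed.
root = {"key": "", "value": None, "children": {
    "a": {"key": "alice", "value": "role:admin", "children": {}},
    "b": {"key": "bob",   "value": "role:guest", "children": {}},
}}
trusted_root_digest = digest(root)

def verify(untrusted_root, trusted_digest):
    # For brevity this recomputes the whole (tiny) dictionary; a real client
    # would hash only the fetched lookup path plus its siblings' digests.
    return digest(untrusted_root) == trusted_digest

print(verify(root, trusted_root_digest))        # True: untampered copy from a cache
root["children"]["b"]["value"] = "role:admin"   # a malicious cache alters bob's entry
print(verify(root, trusted_root_digest))        # False: tampering is detected
```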

Research paper thumbnail of Switchboard: secure, monitored connections for client-server communication

Proceedings 22nd International Conference on Distributed Computing Systems Workshops

Prolonged secure communication requires trust relationships that extend throughout a connection's life cycle. Current tools to establish secure connections such as SSL/TLS and SSH authenticate PKI identities, validate credentials and authorize a trust relationship at the time a connection is established, but do not monitor the trust relationship thereafter. To maintain security over the duration of a prolonged connection, we extend the semantics of SSL to support continuous monitoring of a credential's liveness and the trust relationships that authorize it. Our implementation isolates trust management into a pluggable trust authorization module. We also present an initial design for a host-level secure communication resource that provides secure channels for multiple connections.
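The abstract's key idea is separating the long-lived connection from a pluggable module that continuously checks whether the authorizing credentials remain live. The skeleton below is a hypothetical illustration of that separation, not the Switchboard implementation; the class names and the polling loop are our own assumptions (a real system would be driven by pub/sub revocation events).

```python
# Hypothetical skeleton of pluggable, continuous trust monitoring for a
# long-lived connection (our illustration, not the Switchboard implementation).
import threading, time

class TrustMonitor:
    """Pluggable policy: reports whether a connection's credential is still valid."""
    def __init__(self):
        self._revoked = set()
    def revoke(self, credential_id):
        self._revoked.add(credential_id)
    def is_live(self, credential_id):
        return credential_id not in self._revoked

class MonitoredConnection:
    def __init__(self, credential_id, monitor, poll_interval=0.1):
        self.credential_id, self.monitor = credential_id, monitor
        self.open = True
        threading.Thread(target=self._watch, args=(poll_interval,), daemon=True).start()
    def _watch(self, poll_interval):
        # Polling keeps the sketch self-contained; revocation events would be better.
        while self.open:
            if not self.monitor.is_live(self.credential_id):
                self.open = False          # tear down the channel on revocation
            time.sleep(poll_interval)

monitor = TrustMonitor()
conn = MonitoredConnection("cert:alice", monitor)
print("connection open:", conn.open)       # True
monitor.revoke("cert:alice")
time.sleep(0.3)
print("connection open:", conn.open)       # False: revocation propagated mid-connection
```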

Research paper thumbnail of DisCo: middleware for securely deploying decomposable services in partly trusted environments

24th International Conference on Distributed Computing Systems, 2004. Proceedings., 2004

The DisCo middleware infrastructure facilitates the construction and deployment of decomposable applications for environments with dynamic network connectivity properties and unstable trust relationships spanning multiple administrative domains. Consumers of these services, who are mutually anonymous, must be able to discover, securely acquire the code for, and install service components over the network with only minimal a priori knowledge of their locations. Once installed, these components must be able to interoperate securely and reliably across the network. Solutions exist that address individual challenges posed by such an environment, but they rely upon mutually incompatible authorization models that are frequently insufficiently expressive. The primary contributions of DisCo are (1) a middleware toolkit for constructing such applications, (2) a unifying authorization abstraction, and (3) a realization of this authorization well suited for expressing partial trust relationships typical of such environments. This paper is primarily about the first two of these contributions; [7] presents the third.

Research paper thumbnail of Sloth-NFS and the possibility of using fuzzy control to optimize cache management

NAFIPS 2008 - 2008 Annual Meeting of the North American Fuzzy Information Processing Society, 2008

Research paper thumbnail of An Early Report on Challenges Related to Dissemination of Programming-Centric Mathematics Lessons into 9th Grade Algebra Classes

Dr. Lim's research interests are in students' problem-solving dispositions and instructional strategies to advance their ways of thinking. Dr. Lim is particularly interested in impulsive disposition, students' propensity to act out the first thing that comes to mind. Dr. Lim's research goal centers on helping students advance from an impulsive disposition to an analytic disposition. Dr. Lim and colleagues are currently developing, testing, and refining a survey instrument to assess students' impulsive-analytic disposition. They have been investigating instructional strategies, such as the use of prediction items and classroom voting with clicker technology, to help students become aware of their impulsivity and to elicit and address mathematical misconceptions. Dr. Lim is also exploring the use of mathematical tasks to provoke students' intellectual need for the concepts they are expected to learn. Lately, Dr. Lim has been involved in the iMPaCT-Math project to investigate the use of programming activities to foster student learning of foundational algebraic concepts.

Research paper thumbnail of Reliable and fault tolerant distributed caching using memcached

Enormous networks of computers, supporting various services in the public domain (i.e., the Internet or the World Wide Web) or in private networks serving privileged audiences, require various intelligent techniques to enable high availability and reliability of resources. Most of these services rely upon and serve as query-response systems, where they respond to or query an appropriate host for the solution to a certain problem. For example, a database server always listens on a designated port for structured queries and responds to them from the information it contains. Since communication plays a major role, most of these intelligent techniques range from hardware-level designs that minimize communication overhead or latency to software-level optimizations that reduce communication cost. Caching is one of the most widely used optimization techniques for reducing communication time in computing. Caching is also used in various forms in systems, with the same objective of minimizing communication time. In a service-oriented architecture consisting of many interdependent systems, caching plays a major role in performance. Consider a web server serving HTTP responses by querying a database for supporting information. The service running at the database end can cache frequently used results for faster response. This way the server responds faster, achieving smoother responses and better usability at the user end. In more complex systems, where the load is typically very high and response times must remain acceptable to preserve usability, a more generic caching mechanism may be used. As designed by Fitzpatrick [1], memcached is such a system: a generic caching system that can cache virtually every kind of data as a stream and can respond very fast because it uses the system's idle memory, rather than the disk, to store the data. Memcached is widely used in many online services such as Wikipedia, YouTube, Craigslist, Digg, Flickr, Twitter, and many others. All these services have one major attribute in common, high user activity and high availability, meaning these systems handle thousands of requests per second and try to sustain that 24/7 with no or minimal downtime. While trying out memcached, one major issue we noticed was that, despite being integrated into a distributed network of systems, the memcached server does not itself act as a distributed system. It works as a stand-alone caching abstraction that can reply with the appropriate data, with very low response time, only if that data is available in its cache. This reduces the scalability of memcached by moving all cache management to the client side, where memcached servers remain unaware of their neighbors. Furthermore, this approach increases the risk to system availability in case of failure. While testing in a test environment using 10 memcached servers and 1 client, the network performed as expected, but it failed drastically upon removal of a single node from the pool of systems. Thus we propose a high-performance distributed caching system that will try to heal itself when a system failure is detected. As discussed, the memcached server currently works stand-alone and is completely unaware of its surrounding neighbors, and the current model forces clients to implement an API to map data to servers. Therefore, the system in question will have the following properties: (a) self-healing and rearrangement of the pool in case of a system failure, and (b) automatic redistribution of the cached data upon changes in pool structure. This way the system will not only provide fast cached responses but will also provide high availability even in times of system failure.
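Since the abstract notes that memcached leaves key-to-server mapping entirely to the clients, a common way to express that mapping, and the re-mapping that follows a node failure, is client-side consistent hashing. The sketch below is our own illustration of that client-side behavior (with hypothetical server names), not the proposed self-healing system itself.

```python
# Illustration of client-side key-to-server mapping with consistent hashing,
# and of re-mapping when a server leaves the pool (our sketch, not the proposed system).
import bisect, hashlib

def h(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, servers, replicas=100):
        self.replicas = replicas
        self.ring = []                      # sorted list of (point, server)
        for s in servers:
            self.add(s)
    def add(self, server):
        for r in range(self.replicas):      # virtual nodes smooth the distribution
            bisect.insort(self.ring, (h(f"{server}#{r}"), server))
    def remove(self, server):
        self.ring = [(p, s) for p, s in self.ring if s != server]
    def server_for(self, key):
        points = [p for p, _ in self.ring]
        i = bisect.bisect(points, h(key)) % len(self.ring)
        return self.ring[i][1]

ring = HashRing([f"cache{i}:11211" for i in range(10)])   # hypothetical pool
keys = [f"user:{i}" for i in range(1000)]
before = {k: ring.server_for(k) for k in keys}

ring.remove("cache3:11211")                 # one node drops out of the pool
after = {k: ring.server_for(k) for k in keys}
moved = sum(1 for k in keys if before[k] != after[k])
print(f"{moved} of {len(keys)} keys re-mapped")   # only keys owned by the failed node move
```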
