ERSA’11 Conference Programme



This part is under reconstruction to fit the final program


Formal Methods and Engineering: Tools and Theory

Prof. David Lorge Parnas, "How Engineering Mathematics can Improve Software"

ERSA/WORLDCOMP KEYNOTE TALK

Professor David Lorge Parnas
How Engineering Mathematics can Improve Software,
Prof. David Lorge Parnas,
Middle Road Software, Inc., Canada
Time: 08:50 - 09:45am
Location: Lance Burton Theater
Prof. David Lorge Parnas, Ph.D., P.Eng. (Ontario)
Dr.h.c.: ETH Zürich, Louvain, Lugano
Fellow: RSC, ACM, CAE, GI, IEEE; MRIA
Professor Emeritus, CAS, Engineering, McMaster University
Hamilton, Ontario, Canada
Professor Emeritus, CSIS, University of Limerick
Limerick, Ireland

Abstract

For many decades we have been promised that the “Formal Methods” developed by computer scientists would bring about a drastic improvement in the quality and cost of software development. That improvement has not materialized. We review the reasons for this failure. We then explain the difference between the notations that are used in formal methods and the mathematics that is essential in other areas of Engineering. Finally, we illustrate the way that Engineering Mathematics has proven useful in a variety of software projects.
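The "Engineering Mathematics" Parnas advocates frequently takes the form of tabular expressions: functions specified as condition/value tables whose rows can be mechanically checked for disjointness and completeness. The Python sketch below is our own illustration of that idea, not material from the talk; the example function and table encoding are hypothetical.

```python
# Illustrative sketch (not from the talk): a Parnas-style tabular
# specification, encoded as (condition, value) rows and evaluated in
# Python. The example function |x - y| is hypothetical.

def spec_abs_diff(x, y):
    """Tabular specification of |x - y|.

    The rows must be mutually exclusive and jointly cover all inputs;
    that reviewable property is what makes tabular specifications
    useful in engineering practice."""
    table = [
        (lambda x, y: x >= y, lambda x, y: x - y),
        (lambda x, y: x < y,  lambda x, y: y - x),
    ]
    matches = [value for cond, value in table if cond(x, y)]
    # Exactly one row must apply; otherwise the specification is broken.
    assert len(matches) == 1, "table rows must be disjoint and complete"
    return matches[0](x, y)
```

The runtime assertion plays the role of the completeness/disjointness check that, in Parnas's method, would be done once over the specification itself rather than on every call.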

Slides

How Engineering Mathematics can Improve Software

Bio

Dr. David Lorge Parnas has been studying industrial software development since 1969. Many of his papers have been found to have lasting value. For example, a paper written 25 years ago, based on a study of avionics software, was recently awarded a SIGSOFT IMPACT award.

Parnas has won more than 20 awards for his contributions. In 2007, Parnas was proud to share the IEEE Computer Society's one-time sixtieth anniversary award with computer pioneer Professor Maurice Wilkes of Cambridge University.

Parnas received his B.S., M.S. and Ph.D. in Electrical Engineering from Carnegie Mellon University, and honorary doctorates from the ETH in Zürich (Switzerland), the Catholic University of Louvain (Belgium), the University of Italian Switzerland (Lugano), and the Technische Universität Wien (Austria). He is licensed as a Professional Engineer in Ontario.

Parnas is a Fellow of the Royal Society of Canada (RSC), the Association for Computing Machinery (ACM), the Canadian Academy of Engineering (CAE), the Gesellschaft für Informatik (GI) in Germany and the IEEE. He is a Member of the Royal Irish Academy.

Parnas is the author of more than 270 papers and reports. Many have been repeatedly republished and are considered classics. Among those that have won awards are:

  • “Designing Software for Ease of Extension and Contraction”, IEEE Transactions on Software Engineering, March 1979, which received the ACM “Best Paper” award in 1979 and the “Most Influential Paper from ICSE 3” award ten years later.
  • “The Modular Structure of Complex Systems” (with David Weiss and Paul Clements), IEEE Transactions on Software Engineering, March 1985, which received the “Best Paper from ICSE 7” award at the 17th International Conference on Software Engineering and the 2008 ACM SIGSOFT Impact Paper Award.
  • “Software Aging”, in Proceedings of the 16th International Conference on Software Engineering, Sorrento, Italy, May 16-21, 1994, which received the 2010 ACM SIGSOFT Impact Paper Award.

A collection of his papers can be found in:

Hoffman, D.M., Weiss, D.M. (eds.), “Software Fundamentals: Collected Papers by David L. Parnas”, Addison-Wesley, 2001, 664 pgs., ISBN 0-201-70369-6.

Dr. Parnas is Professor Emeritus at McMaster University in Hamilton, Canada, and at the University of Limerick, Ireland, and an Honorary Professor at Jilin University in China. He is President of Middle Road Software.

Prof. David Andrews, "Design Flows and Run Time Systems ..."

Prof. David Andrews
Design Flows and Run Time Systems for Heterogeneous Multiprocessor Systems on Programmable Chips (MPSOPCS)
Prof. David Andrews,
University of Arkansas, USA
Time: 01:20 - 01:40pm
Location: Gold Room

Abstract:

Emerging platform FPGAs will contain over 1 million LUTs, enough to support hundreds of soft programmable cores. This level of integration opens the potential for designers to switch their approach to performance from tedious point designs of accelerators to more portable and efficient software-based scalable parallel processing. It also brings the use of platform FPGAs in line with modern heterogeneous manycore systems on chips. While clearly exciting, standardized design flows and programming models that would let designers exploit this potential on platform FPGAs do not yet exist. To achieve true software-like levels of productivity, the design flow and development environment for heterogeneous MPSoCs must resemble those of standard homogeneous multiprocessor systems. In this talk we first present the challenges that must be addressed in creating appropriate design flows and run-time systems for heterogeneous MPSoPCs. We then present our approach, which enables developers to guide the construction of, and program, a heterogeneous MPSoC using standard POSIX-compatible programming abstractions. The ability to use a standard programming model is achieved by using a hardware-based microkernel to provide OS services to all heterogeneous components. This approach makes programming heterogeneous MPSoCs transparent, and can increase programmer productivity by replacing synthesis of custom components with faster compilation of heterogeneous executables. The use of a hardware microkernel provides OS services in an ISA-neutral manner, which allows for seamless synchronization and communication amongst heterogeneous threads.
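The programming model described above lets every computation unit, soft core or hardware accelerator, be written against the same standard thread-and-mutex abstractions. The following Python/threading mock-up is our own illustration of that idea, not the authors' toolchain; the unit names and the stand-in computation are hypothetical.

```python
# Illustrative sketch (not the authors' system): heterogeneous units
# programmed uniformly as threads sharing standard (POSIX-like)
# synchronization primitives, the role a hardware microkernel would
# play in an ISA-neutral way on the real MPSoC.

import threading
import queue

results = queue.Queue()
lock = threading.Lock()          # stands in for a microkernel mutex service

def worker(unit_name, data):
    """Same code path whether unit_name models a soft CPU core or an
    FPGA accelerator: the OS-service API is identical."""
    partial = sum(data)          # stand-in for the unit's computation
    with lock:                   # ISA-neutral synchronization
        results.put((unit_name, partial))

threads = [
    threading.Thread(target=worker, args=("soft-core-0", [1, 2, 3])),
    threading.Thread(target=worker, args=("hw-accel-1", [4, 5, 6])),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

total = sum(v for _, v in (results.get() for _ in threads))
print(total)  # 21
```

The point of the sketch is only the uniformity: replacing one `worker` with a hardware implementation would change compilation/synthesis, not the program's structure.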

Bio:

David Andrews holds the Mullins Endowed Chair of Computer Engineering in the Computer Science and Computer Engineering (CSCE) Department at the University of Arkansas. Dr. Andrews received his B.S.E.E. and M.S.E.E. from the University of Missouri-Columbia and his Ph.D. in Computer Science from Syracuse University. From 1985 to 1991 he worked as a senior systems engineer and research scientist at General Electric's Electronics Laboratory and Advanced Technology Laboratory in Syracuse, New York, where he performed research for GE's Aerospace Business Group on advanced signal-processing systems, including one of the first out-of-order-execution, dual-fetch RISC microprocessors, and led the development of the operating system for the Seawolf submarine, a 128-node real-time distributed message-passing system. Since 1991, he has held faculty positions at the University of Kansas and the University of Arkansas. His research interests are in the broad area of parallel computer architectures, operating systems, and programming models for real-time and embedded systems.


Reconfigurable and Evolvable Hardware Architectures

Prof. Dominique Lavenier, "Next Generation Sequencing..."

Prof. Dominique Lavenier
Next Generation Sequencing Data Processing: How Can Reconfigurable Computing Help?,
Prof. Dominique Lavenier,
IRISA, Rennes, France
Time: 02:30pm - 03:00pm
Location: Gold Room

Abstract:

With the fast progress of next-generation sequencing (NGS) machines, genomics research is being profoundly reshaped. These new biotechnologies generate an impressive flow of raw genomic data from which pertinent and significant information must be extracted. To sustain this high data-processing throughput, parallelism is the only way. Today, two major challenges must be considered: (1) developing new parallel algorithms for the new applications arising from the wide possibilities opened by NGS technologies; and (2) developing new parallel architectures as an alternative to the huge clusters currently used in bioinformatics computing centers. Reconfigurable computing can address both challenges by providing dedicated parallel algorithms tailored to ad hoc hardware. The talk will present the current NGS technologies, the standard associated treatments, and the challenges to which reconfigurable computing should be able to bring efficient solutions.

Bio:

Dominique Lavenier is a senior CNRS (French National Center for Scientific Research) researcher and Professor at ENS Cachan. He is currently leading the Symbiose bioinformatics research group at IRISA. He received the “Médaille de Bronze” of the French research council CNRS in 1992, and the French Cray Prize (1996) in algorithms, architecture, and micro-electronics. From August 1999 to August 2000, he worked in the Nonproliferation and International Security Division at the Los Alamos National Laboratory, NM, USA. He has designed several dedicated hardware accelerators for bioinformatics and genomic processing. His research interests include hardware design, parallel architectures, parallel algorithms, bioinformatics, genomics, string processing (molecular biology), and signal processing.

Prof. Jeremy Buhler, "Systolic Arrays for Biosequence ..."

Prof. Jeremy Buhler
Design-Space Exploration of Systolic Arrays for Biosequence Algorithms,
Prof. Jeremy Buhler, Prof. Roger Chamberlain, and Dr. Arpith Jacob,
Washington University in St. Louis, USA
Time: 03:50 - 04:10pm
Location: Gold Room

Abstract:

The last decade has seen a revolution in technologies to sequence large amounts of DNA. Multiple order-of-magnitude improvements in speed and cost have made direct sequencing the biologist’s choice not only for determining the DNA sequence of a genome but also for studying the expression and control of its genes. However, this boon for biology is potentially a bane for bioinformatics, as growth in the volume of sequence data outstrips improvements in the compute power of CPU-based computing platforms.

Reconfigurable computing is an attractive alternative to conventional systems for implementing sequence-based bioinformatics tools. Many important algorithms for tasks such as biosequence comparison, sequence-model alignment, and RNA structure prediction can be described by dynamic programming (DP) recurrences, which can be realized as highly parallel systolic arrays [4]. These arrays should map straightforwardly to implementations on reconfigurable hardware that greatly outperform conventional CPU-based implementations. In practice, however, a programmer must search among numerous possible array designs to find one that balances conflicting needs for high performance, bounded resource utilization, and ability to handle problems of realistic size. The demands of this difficult balancing act can make high-performance array implementation practically inaccessible to all but a few expert designers.

In this work, we describe our recent progress in simplifying and automating the exploration of array design space, with application to realizing DP recurrences from bioinformatics. Using the tools of polyhedral theory [1], we describe the space of all systolic arrays for a particular recurrence and formulate optimization criteria that allow us to search efficiently for good point designs in this space. We have demonstrated the utility of our methods by building high-performance FPGA implementations of the Nussinov [5] and Zuker [6] algorithms for secondary structure prediction in RNAs.
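To make the DP structure behind these array designs concrete, here is the textbook Nussinov recurrence in plain Python: N(i,j) is the maximum number of base pairs in the subsequence from i to j, built from pairing the endpoints, dropping an endpoint, or bifurcating. This is our own sketch of the published algorithm [5], not the authors' FPGA realization.

```python
# Minimal sketch of the Nussinov recurrence referenced above:
# maximize the number of complementary base pairs in an RNA sequence.
# Shown only to illustrate the DP dependencies that the systolic
# arrays parallelize; not the authors' hardware implementation.

def nussinov(seq, min_loop=1):
    """Return the maximum number of base pairs in seq (a string over
    A/C/G/U), forbidding hairpin loops shorter than min_loop."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
             ("G", "U"), ("U", "G")}
    n = len(seq)
    N = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):      # increasing subsequence length
        for i in range(n - span):
            j = i + span
            # pair the endpoints (if complementary) around the interior
            best = N[i + 1][j - 1] + ((seq[i], seq[j]) in pairs)
            # leave one endpoint unpaired
            best = max(best, N[i + 1][j], N[i][j - 1])
            # bifurcation: split [i, j] into two independent subproblems
            for k in range(i + 1, j):
                best = max(best, N[i][k] + N[k + 1][j])
            N[i][j] = best
    return N[0][n - 1]
```

Each cell N(i,j) depends only on cells with smaller span, which is exactly the wavefront structure that maps onto the anti-diagonal schedules of a systolic array.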

An important feature of many bioinformatics problems is that performance is measured not by latency of one operation (e.g. comparing two sequences or folding one RNA) but by throughput, the rate at which a large number of operations can be completed. Explicitly optimizing for throughput leads to a surprisingly easy-to-automate procedure for array design-space search; we have used this procedure to discover novel, highly efficient FPGA realizations of the Nussinov algorithm [2]. Using these ideas and others related to partitioned array design, we have constructed an FPGA implementation of the more complex Zuker algorithm [3] that outperforms conventional CPU implementations by over 100-fold.

Finally, we present a vision, based on our work, for automated compilation of common DP recurrences in bioinformatics to efficient fine-grained parallel implementations. Such a compiler would put the power of reconfigurable computing in the hands of the typical bioinformatics programmer and provide a boost to bioinformaticians in their ongoing race to keep up with advances in sequencing technology.

References

  1. A. Darte, R. Schreiber, B. R. Rau, and F. Vivien. Constructing and exploiting linear schedules with prescribed parallelism. ACM Trans. Design Automation of Electronic Systems, 7(1):159-172, 2002.
  2. A. Jacob, J. Buhler, and R. Chamberlain. Design of throughput-optimized arrays from recurrence abstractions. In Proc. 21st IEEE Intl. Conf. Application-Specific Architectures and Processors, pages 133-140, 2010.
  3. A. Jacob, J. Buhler, and R. Chamberlain. Rapid RNA folding: analysis and acceleration of the Zuker recurrence. In Proc. 18th IEEE Ann. Intl. Symp. Field-Programmable Custom Computing Machines, pages 87-94, 2010.
  4. D. Lavenier, P. Quinton, and S. Rajopadhye. Advanced Systolic Design, chapter 23. CRC Press, 1999.
  5. R. Nussinov, G. Pieczenik, J. R. Griggs, and D. J. Kleitman. Algorithms for loop matchings. SIAM Journal on Applied Mathematics, 35(1):68-82, 1978.
  6. M. Zuker. Computer prediction of RNA structure. Methods in Enzymology, 180:262-288, 1989.

Bio:

Jeremy Buhler is an associate professor in the Dept. of Computer Science and Engineering at Washington University in St. Louis. He leads the department’s High Performance Computational Biology Group. His research interests include acceleration of algorithms for biological sequence comparison, mapping of bioinformatics algorithms onto fine-grained parallel architectures, and developing new bioinformatics tools in a variety of areas. He received his BA in Computer Science from Rice University and his MS and PhD in Computer Science from the University of Washington.

Bio:

Prof. Roger D. Chamberlain

Roger D. Chamberlain is an associate professor in the Dept. of Computer Science and Engineering at Washington University in St. Louis. His research interests include specialized computer architectures for a variety of applications (e.g., astrophysics and biology), high-performance parallel and distributed application development, energy-efficient computation, and high-capacity I/O systems. He received his BSCS, BSEE, MSCS, and DSc degrees all from Washington University and is a member of IEEE and ACM.

Bio:

Dr. Arpith Chacko Jacob

Arpith Chacko Jacob is a postdoctoral researcher at IBM’s Watson Research Center in Yorktown Heights, New York. His research interests include computer architecture, parallelizing compilers, reconfigurable computing, and biosequence algorithms. He received his Bachelor of Engineering degree from University of Madras, India in 2003 and his MS and PhD degrees in Computer Science from Washington University in St. Louis in 2010.

Prof. Andy Tyrrell, "Reconfigurable and Evolvable Architectures ..."

Prof. Andy Tyrrell
Reconfigurable and Evolvable Architectures and their role in Designing Computational Systems,
Prof. Andy Tyrrell,
Department of Electronics, The University of York, UK
Time: 03:20pm - 03:50pm
Location: Gold Room

Abstract:

Biological inspiration in the design of computing machines finds its source in essentially three biological models: phylogenesis, the history of the evolution of the species; ontogenesis, the development of an individual as directed by its genetic code; and epigenesis, the development of an individual through learning processes influenced both by its genetic code and by the environment. These three models share a common basis: a one-dimensional description of the organism, the genome. If one wishes to implement some or all of these ideas in hardware, can we use COTS devices, or do we need devices designed specifically for the purpose? This talk will consider some historical work on bio-inspired architectures before moving on to a new device designed and built specifically for bio-inspired work. It will consider some of the novel features present in this device, such as self-configuration and dynamic routing, which assist the implementation of ontogenetic capabilities such as development, self-repair and self-replication.

Bio:

Andy Tyrrell received a 1st class honours degree in 1982 and a PhD in 1985, both in Electrical and Electronic Engineering. He joined the Electronics Department at the University of York in April 1990 and was promoted to the Chair of Digital Electronics in 1998. Between August 1987 and August 1988 he was a visiting research fellow at the Ecole Polytechnique Fédérale de Lausanne, Switzerland. His main research interests are in the design of biologically-inspired architectures, artificial immune systems, evolvable hardware, and FPGA system design. In particular, over the last 7 years his research group at York has concentrated on bio-inspired systems. This work has included the creation of embryonic processing arrays, intrinsic evolvable hardware systems, and the immunotronics hardware architecture. He is Head of the Intelligent Systems research group at York. He has published over 250 papers in these areas. He is a Senior Member of the IEEE and a Fellow of the IET.

Dr. Eric Stahlberg, "Heterogeneous Accelerated Bioinformatics ..."

Dr. Eric Stahlberg
Heterogeneous Accelerated Bioinformatics — Perspectives for Impacting Cancer Research and Treatment,
Dr. Eric A. Stahlberg,
US National Cancer Institute, USA

Time: 04:10 - 04:30pm
Location: Gold Room

Abstract:

The presentation will provide insight into the areas where reconfigurable and FPGA-based computing can contribute to accelerating bioinformatics applications for cancer research and treatment, in the context of the heterogeneous computing environments being discussed for exascale and extreme-scale systems. Experiences with application development in existing heterogeneous environments will be discussed and contrasted. Challenges and opportunities for reconfigurable computing to rapidly impact the effective use of emerging experimental techniques, such as next-generation sequencing, will also be presented.

Bio:

Eric Stahlberg is the director for the bioinformatics core at the US National Cancer Institute.

Eric Stahlberg was previously a visiting computational scientist at Wittenberg University, where he directed the institution's efforts in computational science while also serving as president of OpenFPGA Inc., a non-profit organization helping to develop standards for easier application development for FPGAs. He has twenty years of experience in high-performance computing and applications in academia and industry, frequently involving the development and use of standards for easier portability and maintainability of high-performance software applications. An innovator, Dr. Stahlberg has helped bring forward many initiatives in software validation, bioinformatics, health information, and computational chemistry. His latest efforts concentrate on helping adapt computer science curricula to broadly embrace parallel and accelerated computing at all levels.

Prof. Jim Tørresen, "Run-time Reconfigurable Hardware ..."

Prof. Jim Tørresen
Can Run-time Reconfigurable Hardware be more Accessible?
Prof. Jim Tørresen,
University of Oslo, Norway
currently a visiting professor at Cornell University
Time: 01:40 - 02:00pm
Location: Gold Room

Abstract:

This talk will describe how we are applying FPGA technology to design high-performance run-time reconfigurable computing architectures. This research is undertaken through the project Context Switching Reconfigurable Hardware for Communication Systems (COSRECOS), funded by the Research Council of Norway for 2009-2013.

The overall goal of the project is to contribute to making run-time reconfigurable systems more feasible in general. This includes introducing architectures for reducing reconfiguration time as well as undertaking tool development. Case studies based on applications in network and communication systems will be a part of the project. The talk covers how we plan to address the challenge of changing hardware configurations while a system is in operation, and gives an overview of promising initial results so far.

Bio:

Jim Torresen received his M.Sc. and Dr.ing. (Ph.D.) degrees in computer architecture and design from the Norwegian University of Science and Technology, University of Trondheim, in 1991 and 1996, respectively. He was employed as a senior hardware designer at NERA Telecommunications (1996-1998) and at Navia Aviation (1998-1999). Since 1999, he has been a professor at the Department of Informatics at the University of Oslo (associate professor 1999-2005). Jim Torresen was a visiting researcher at Kyoto University, Japan, for one year (1993-1994), spent four months at the Electrotechnical Laboratory, Tsukuba, Japan (1997 and 2000), and is now a visiting professor at Cornell University.

His current research interests include bio-inspired computing, machine learning, reconfigurable hardware, robotics, and their application to complex real-world problems. He has proposed several novel methods in these areas. He has published a number of scientific papers in international journals, books, and conference proceedings, and has given ten tutorials and several invited talks at international conferences. He serves on the program committees of more than ten international conferences, is a regular reviewer for a number of international journals, and has acted as an evaluator for proposals in EU FP7.

A list and collection of publications can be found at the following web page: http://www.ifi.uio.no/~jimtoer/papers.html

More information on the web:
http://www.ifi.uio.no/~jimtoer
http://www.matnat.uio.no/forskning/prosjekter/crc

Prof. Pao-Ann Hsiung, "A Self-Adaptive Hardware-Software System ..."

Prof. Pao-Ann Hsiung
SAHA: A Self-Adaptive Hardware-Software System Architecture for Ubiquitous Computing Applications,
Prof. Pao-Ann Hsiung and Chun-Hsian Huang,
National Chung Cheng University, Taiwan
Time: 02:00 - 02:20pm
Location: Gold Room

Abstract:

In ubiquitous computing environments, services and devices must be dynamically adapted to changing conditions and requirements. System adaptivity thus becomes a key requirement for providing better system performance. Existing ubiquitous computing systems either support only software adaptation or limit reconfigurable hardware designs to use as conventional hardware devices; as a result, system adaptivity and performance are quite restricted. To provide more robust system adaptation, in this talk we will propose a self-adaptive hardware/software system architecture (SAHA) that consists of service suppliers, a hardware adapter, a system manager, an observer, and a reconfigurable hardware architecture. SAHA supports both hardware preemption and hardware virtualization within a complete self-aware system adaptation mechanism, so that the utilization of system resources is enhanced and better performance is provided for ubiquitous computing applications. Experiments with a ubiquitous computing service for information encryption demonstrate that SAHA can reduce turnaround time by at least 11.9% compared with the conventional method.

Bio:

Pao-Ann Hsiung, Ph.D., received his B.S. in Mathematics and his Ph.D. in Electrical Engineering from National Taiwan University, Taiwan, ROC, in 1991 and 1996, respectively. From 2001 to 2002 he was an assistant professor, and from 2002 to 2007 an associate professor, in the Department of Computer Science and Information Engineering, National Chung Cheng University, Taiwan, ROC. Since August 2007, he has been a full professor. He received the 2010 Outstanding Research Award and the 2004 Young Scholar Research Award from National Chung Cheng University, and the 2001 ACM Taipei Chapter Kuo-Ting Li Young Researcher Award, for his significant contributions to design automation of electronic systems. He also received advisor awards for Best Master's Theses in eight consecutive years (2002-2009), as well as in embedded system competitions and RFID design competitions. Dr. Hsiung has published more than 200 papers in international journals and conferences. He is a senior member of the IEEE, a senior member of the ACM, and a life member of the IICM. He has been included in several professional listings, such as Marquis' Who's Who in the World, Marquis' Who's Who in Asia, Outstanding People of the 20th Century (International Biographical Centre, Cambridge, England), Rifacimento International's Admirable Asian Achievers, Afro/Asian Who's Who, and Asia/Pacific Who's Who. Dr. Hsiung is an editorial board member of six international journals, including the International Journal of Advancements in Computing Technology (AICIT); the International Journal of Embedded Systems (Inderscience Publishers, USA); the International Journal of Multimedia and Ubiquitous Engineering (Science and Engineering Research Center, USA); the Journal of Software Engineering (Academic Journals, Inc., USA); the Open Software Engineering Journal (Bentham Science Publishers, Ltd., USA); and the International Journal of Patterns. Dr. Hsiung has been on the program committee of more than 80 international conferences.
He served as organizer for PDPTA’99, RTC’99, DSVV’2000, PDES’2005, WoRMES’2009, ITNG’2010, ITNG’2011, AVTA’2011, and ERSA’2011. Dr. Hsiung’s main research interests include reconfigurable computing and system design, multi-core programming, cognitive radio architecture, System-on-Chip (SoC) design and verification, embedded software synthesis and verification, real-time system design and verification, hardware-software codesign and coverification, and component-based object-oriented application frameworks for real-time embedded systems.

Bio

Chun-Hsian Huang

Chun-Hsian Huang received his B.S. degree in Information and Computer Education from National Taitung University, Taitung, Taiwan, ROC, in 2004. He is currently working toward his Ph.D. in the Department of Computer Science and Information Engineering at National Chung Cheng University, Chiayi, Taiwan, ROC. He is a student member of the IEEE and the ACM. His research interests include reconfigurable computing and system design, hardware/software codesign and coverification, UML-based embedded system design methodology, NoC-based architecture design, ubiquitous computing, and formal verification.


Security: Threats and Solutions

Prof. Eugene Howard Spafford, "The Nature of Cyber Security"

ERSA/WORLDCOMP KEYNOTE TALK

Prof. Eugene Howard Spafford
The Nature of Cyber Security,
Prof. Eugene Howard Spafford,
Purdue University, USA
Leading computer security expert
Time: 09:50 - 10:45am
Location: Lance Burton Theater

Abstract:

There is an on-going discussion about establishing a scientific basis for cyber security. Efforts to date have often been ad hoc and conducted without any apparent insight into deeper formalisms. The result has been repeated system failures, and a steady progression of new attacks and compromises.

A solution, then, would seem to be to identify underlying scientific principles of cyber security, articulate them, and then employ them in the design and construction of future systems. This is at the core of several recent government programs and initiatives.

But the question that has not been asked is whether “cyber security” is really the correct abstraction for analysis. There are some hints that perhaps it is not, and that some other approach is really more appropriate for systematic study, perhaps one we have yet to define.

In this talk I will provide some overview of the challenges in cyber security, the arguments being made for exploration and definition of a science of cyber security, and also some of the counterarguments. The goal of the presentation is not to convince the audience that either viewpoint is necessarily correct, but to suggest that perhaps there is sufficient doubt that we should carefully examine some of our assumptions about the field.

Bio:

Eugene Howard Spafford is a Professor at Purdue University. A historically significant Internet figure, he is renowned for first analyzing the Morris Worm, one of the earliest computer worms, and for his prominent role in the Usenet backbone cabal. Spafford was a member of the President's Information Technology Advisory Committee from 2003 to 2005, has been an advisor to the National Science Foundation (NSF), and serves as an advisor to over a dozen other government agencies and major corporations.

Spafford attended State University of New York at Brockport for three years and completed his B.A. with a double major in mathematics and computer science in that time. He then attended the School of Information and Computer Sciences (now the College of Computing) at the Georgia Institute of Technology. He received his M.S. in 1981, and Ph.D. in 1986 for his design and implementation of the original Clouds distributed operating system kernel.

During the early formative years of the Internet, Spafford made significant contributions to establishing semi-formal processes to organize and manage Usenet, then the primary channel of communication between users, as well as being influential in defining the standards of behavior governing its use.

Prof. Shiu-Kai Chin, "Logic Design for Access Control,..."

Prof. Shiu-Kai Chin
Logic Design for Access Control, Security, Trust, and Assurance,
Prof. Shiu-Kai Chin,
Syracuse University, USA
Senior Scientist, Serco-NA, Inc.
Time: 11:20am - 11:50am
Location: Gold Room

Abstract:

Designers of hardware and software frequently do their work as part of larger systems in which the confidentiality, integrity, and availability of information and other resources are primary concerns. Whether the system is a military one, where assurance of mission-critical capabilities is paramount, or a financial-services one, where assurance of the integrity of financial data and transactions is crucial, the concerns are the same. What are the access control and security policies and mechanisms that permit or deny the use of a resource or capability? Trust: who or what is believed, and under what circumstances? Assurance: how do we know that what is proposed or implemented makes sense and is justifiable? Pundits have said for decades that security must be designed into a system from the very beginning. Nevertheless, until front-line and newly graduated engineers are able to design and reason about security in ways similar to those in which they use logic to design hardware, the security of our systems will remain poor. This paper describes a logic for reasoning about access control, security, and trust with design and verification engineers in mind. The logic and its accompanying formal verification methods span the full range of abstraction from hardware, firmware, and operating systems, through middleware, networks, protocols, and security policies, to concepts of operations. We have used this logic to describe and verify the concept of operations for interoperable credentials for ultra-high-value commercial transactions with JP Morgan Chase. We are using this logic and its associated verification tools to assure the integrity and security of command and control for Air Force operations. The logic and methods have been taught to over 226 junior and senior ROTC cadets from over forty US universities as part of the Air Force Research Laboratory's Advanced Course in Engineering Cyber Security Boot Camp, and to over twenty active-duty Air Force officers and DoD contractors. This paper also touches on our experience applying our methods to real products and on our students' experiences learning these methods.
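Access-control logics of the kind described above let a reference monitor derive authorization from statements ("P says phi") and delegation ("P speaks for Q"). The toy Python sketch below is our own drastic simplification for illustration only; the principals, policy format, and one-step delegation rule are hypothetical and do not reproduce the full calculus from Chin and Older's textbook.

```python
# Toy sketch of an access-control calculus: grant an action exactly
# when some principal trusted for it "says" it, closing statements
# under a (one-step) speaks-for delegation. Our own simplified
# encoding, for illustration only.

def authorized(action, policy, statements, speaks_for):
    """Decide whether 'action' is authorized.

    policy:     {action: set of principals trusted for that action}
    statements: set of (principal, action) pairs, "P says <action>"
    speaks_for: set of (p, q) pairs, "p speaks for q"."""
    said_by = set(statements)
    # Delegation: if p said a and p speaks for q, count a as said by q.
    # One closure step suffices for this small demo.
    for (p, q) in speaks_for:
        for (who, a) in list(said_by):
            if who == p:
                said_by.add((q, a))
    return any((p, action) in said_by for p in policy.get(action, set()))

policy = {"open_door": {"Alice"}}
statements = {("Bob", "open_door")}
delegation = {("Bob", "Alice")}          # Bob speaks for Alice

print(authorized("open_door", policy, statements, delegation))  # True
print(authorized("open_door", policy, statements, set()))       # False
```

The value of the logical formulation is that such derivations can be checked mechanically, which is what makes the approach usable across the stack from hardware to concepts of operations.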

Bio:

Shiu-Kai Chin is a Professor in the Department of Electrical Engineering and Computer Science at Syracuse University, Syracuse, New York. His research applies mathematical logic to the engineering of trustworthy systems. He and Susan Older are authors of the textbook Access Control, Security, and Trust: A Logical Approach, CRC Press 2011. His research is used by JP Morgan Chase and by the Air Force Research Laboratory. He is a member of the National Institute of Justice's Electronic Crime Technical Working Group. Shiu-Kai is Director of the Center for Information and Systems Assurance and Trust. He has served as interim dean of the L.C. Smith College of Engineering and Computer Science from 2006-2008 and Computer Engineering Program Director from 2002-2006. In 1997 he was appointed Laura J. and L. Douglas Meredith Professor for Teaching Excellence. Prior to joining SU, Shiu-Kai was a senior engineer and program manager at General Electric. During his eleven years at GE, his designs were part of several products including a nuclear fuel-rod monitor, a memory manager for a heart imaging system similar to a CAT scanner, and a custom radiation-hard CMOS processor combined with a GaAs C-band transceiver for controlling phased-array radars. He is a graduate of GE's Advanced Course in Engineering.

Prof. Ryan Kastner, "Bit-Tight Design: A Novel Design Methodology for..."

Prof. Ryan Kastner
Bit-Tight Design: A Novel Design Methodology for Reconfigurable Systems,
Prof. Ryan Kastner,
University of California San Diego, CA, USA
Time: 10:00 - 10:20am
Location: Gold Room

Abstract:

Trusted systems fundamentally rely on the ability to tightly control the flow of information both into and out of the device. Due to their inherent programmability, reconfigurable systems are riddled with security holes (timing channels, undefined behaviors, storage channels, backdoors) that attackers can use as a foothold. System designers are constantly forced to respond to these attacks, often only after significant damage has been inflicted. We propose to use the reconfigurable nature of the system to our advantage by creating a new hardware foundation for secure computing that carefully and precisely manages all flows of information, making them explicit and verifiable from the hardware logic gates all the way up the system stack. This can be used to ensure that private keys are never leaked (for secrecy), and that untrusted information is not used in making critical decisions (for safety and fault tolerance) nor in determining the schedule (for real-time guarantees).
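The gate-level information flow tracking that underlies this kind of hardware foundation can be sketched in miniature. The following Python fragment is an illustrative sketch, not code from the paper: it attaches a taint (trust) bit to each wire of an AND gate and checks the taint-propagation logic exhaustively against a precise definition.

```python
# Gate-level information flow tracking (GLIFT) for a 2-input AND gate.
# Each wire carries a value bit and a taint bit. The "shadow" taint logic
# marks the output tainted only when a tainted input can actually change
# the output value (e.g., an untainted 0 on one input blocks the other).

def glift_and(a, a_t, b, b_t):
    out = a & b
    out_t = (a_t & b_t) | (a_t & b) | (b_t & a)
    return out, out_t

def precise_taint(a, a_t, b, b_t):
    """Ground truth: output is tainted iff varying the tainted inputs
    over all values can produce more than one output value."""
    outs = {aa & bb
            for aa in ((0, 1) if a_t else (a,))
            for bb in ((0, 1) if b_t else (b,))}
    return int(len(outs) > 1)

# Exhaustive check over all 16 input/taint combinations.
for a in (0, 1):
    for b in (0, 1):
        for a_t in (0, 1):
            for b_t in (0, 1):
                assert glift_and(a, a_t, b, b_t)[1] == \
                       precise_taint(a, a_t, b, b_t)
```

Larger circuits compose such shadow gates wire-by-wire, which is what makes the flows "explicit and verifiable from the hardware logic gates" upward.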

Bio:

Ryan Kastner is an associate professor in the Department of Computer Science and Engineering at the University of California, San Diego. He received a PhD in Computer Science from UCLA (2002), and a master's degree in engineering (2000) and bachelor's degrees (BS) in both Electrical Engineering and Computer Engineering (1999) from Northwestern University. He spent the first five years after his PhD as a professor in the Department of Electrical and Computer Engineering at the University of California, Santa Barbara.

Professor Kastner's current research interests reside in the realm of embedded system design, in particular the use of reconfigurable computing devices for digital signal processing as well as hardware security. He has published over 100 technical articles and has authored three books: "Synthesis Techniques and Optimizations for Reconfigurable Systems", "Arithmetic Optimizations for Polynomial Expressions and Linear Systems", and "Handbook on FPGA Design Security". He has served as a member of numerous conference technical committees spanning topics such as reconfigurable computing (ISFPGA, FPL, FPT, ERSA, RAW, ARC), electronic design automation (DAC, ICCAD, DATE, ICCD, GLSVLSI), wireless communication (GLOBECOM, SUTC), hardware security (HOST), and underwater networking (WUWNet). He serves on the editorial board of the IEEE Embedded Systems Letters.

Prof. Jürgen Teich, "Verifying the Authorship of Embedded IP Cores:..."

Prof. Jürgen Teich
Verifying the Authorship of Embedded IP Cores: Watermarking and Core Identification Techniques,
Prof. Jürgen Teich and Dr. Daniel Ziener,
University of Erlangen-Nuremberg, Germany
Time: 10:40am - 11:10am
Location: Gold Room

Abstract:

In this paper, we present an overview of new watermarking and identification techniques for FPGA IP cores. Unlike most existing watermarking techniques, ours focus on ease of verification, even when the protected cores are embedded in a product. Moreover, we concentrate on higher abstraction levels for embedding the watermark, particularly the logic level, where IP cores are distributed as netlist cores. With the presented watermarking methods, it is possible to watermark IP cores at the logic level and to identify them, with high likelihood and in a reproducible way, in a product purchased from a company suspected of IP fraud. The investigated techniques establish authorship by verification of either an FPGA bitfile or the power consumption of a given FPGA.

Bio:

Jürgen Teich received his master's degree (Dipl.-Ing.) with honours in 1989 from the University of Kaiserslautern. From 1989 to 1993, he was a PhD student at the University of Saarland, Saarbruecken, Germany, from which he received his PhD degree (summa cum laude). His PhD thesis, entitled `A Compiler for Application-Specific Processor Arrays', summarizes his work on extending techniques for mapping computation-intensive algorithms onto dedicated VLSI processor arrays. In 1994, Dr. Teich joined the DSP design group of Prof. E. A. Lee and D. G. Messerschmitt in the Department of Electrical Engineering and Computer Sciences (EECS) at UC Berkeley as a postdoctoral researcher working on the Ptolemy project.

From 1995 to 1998, he held a position at the Institute of Computer Engineering and Communications Networks Laboratory (TIK) at ETH Zurich, Switzerland, finishing his Habilitation, entitled `Synthesis and Optimization of Digital Hardware/Software Systems', in 1996.

From 1998 to 2002, he was a full professor (C4) in the Electrical Engineering and Information Technology department of the University of Paderborn, holding a chair in Computer Engineering.

Since 2003, he has been a full professor (C4) in the Department of Computer Science of the Friedrich-Alexander University Erlangen-Nuremberg, holding the chair in Hardware/Software Co-Design. Dr. Teich has been a member of multiple program committees of well-known conferences and workshops. He is a Senior Member of the IEEE and the author of a textbook on co-design published by Springer in 2007. His research interests are massive parallelism, embedded systems, co-design, and computer architecture.

Since 2004, Prof. Teich has also been an elected reviewer for the German Research Foundation (DFG) for the area of Computer Architecture and Embedded Systems. Prof. Teich is involved in many interdisciplinary national basic-research projects as well as industrial projects. He currently supervises more than 30 PhD students. Since 2010, he has also been the coordinator of the Transregional Research Center 89 on Invasive Computing, funded by the DFG.

Bio:

Dr. Daniel Ziener

Daniel Ziener obtained his university entrance qualification in 1998. He received his diploma degree (Dipl.-Ing. (FH)) in Electrical Engineering from the University of Applied Sciences Aschaffenburg, Germany, in August 2002. Alongside his studies, he gained industrial research experience during an internship at the IBM Germany Development Labs in Böblingen. From 2003 to 2009 he worked for the Fraunhofer Institute for Integrated Circuits (IIS) in Erlangen, Germany, as a research staff member in the electronic imaging department. In 2003 he also joined the Chair of Hardware-Software-Co-Design at the University of Erlangen-Nuremberg, Germany, headed by Prof. Jürgen Teich, as a PhD student. In 2010 he received his PhD degree (Dr.-Ing.) with the thesis “Techniques for Increasing Security and Reliability of IP Cores Embedded in FPGA and ASIC Designs”. Since 2010 he has led the Reconfigurable Computing Group at the Chair of Hardware-Software-Co-Design at the University of Erlangen-Nuremberg.

His main research interests are IP core watermarking, efficient usage of FPGA structures, design of signal processing FPGA cores, and reliable and fault tolerant embedded systems. Daniel Ziener has been a reviewer for several international conferences, for the IET Journal on Computers & Digital Techniques, the IEEE Transactions on Very Large Scale Integration Systems, and the Elsevier Journal for Microprocessors and Microsystems.

Prof. Tim Güneysu, "High-Performance Cryptography"

Prof. Tim Güneysu
Establishing Dedicated Functions on Recent FPGA Devices for High-Performance Cryptography,
Prof. Tim Güneysu,
Applied Data Security and Cryptography, Ruhr University Bochum, Germany
Time: 09:20 - 09:40am
Location: Gold Room

Abstract:

This work presents a unique design approach for implementing standardized symmetric and asymmetric cryptosystems on modern FPGA devices. In contrast to many other FPGA implementations that algorithmically optimize the cryptosystems for optimal placement in the generic array logic, our primary implementation goal is to shift as many cryptographic operations as possible into the dedicated function hardcores that have become available on many reconfigurable devices. For example, some of these dedicated functions provide large blocks of memory or fast arithmetic functions for digital signal processing applications, and they can also be adopted for efficient cryptographic implementations. Based on these dedicated functions, we present specific design approaches that achieve a performance of up to 55 GBit/s for the symmetric AES block cipher (FIPS 197) and a throughput of more than 30,000 scalar multiplications per second for asymmetric Elliptic Curve Cryptography over NIST’s P-224 prime (FIPS 186-3).
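As a sanity check on the headline figure, a fully unrolled and pipelined AES-128 core that accepts one 128-bit block per clock cycle reaches roughly 55 GBit/s at about 430 MHz. The clock frequency below is an assumed, illustrative value, not a number taken from the paper.

```python
# Back-of-envelope throughput for a fully pipelined AES-128 core that
# accepts one 128-bit block per clock cycle:
#   throughput = block size x clock frequency.
# The ~430 MHz clock is an assumption for illustration only.
block_bits = 128
clock_hz = 430e6
throughput_gbit = block_bits * clock_hz / 1e9
print(f"{throughput_gbit:.1f} Gbit/s")   # prints: 55.0 Gbit/s
```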

Bio:

Tim Güneysu is an assistant professor leading the research group “Hardware Security” at Ruhr University Bochum in Germany. The major research topics of his group are cryptographic and cryptanalytic implementations and systems, in particular those involving reconfigurable devices. With respect to cryptographic implementations, he established and still holds performance records for the symmetric AES-128 block cipher on Xilinx Virtex-5 FPGAs and for asymmetric Elliptic Curve Cryptography standards on Xilinx Virtex-4 FPGAs. He co-developed the COPACOBANA FPGA cluster system in cooperation with the University of Kiel, originally designed to tackle the vast amount of complex computations required by a large variety of cryptanalytic applications. Due to the great success of this unique cluster system, it is now sold worldwide by the spin-off company Sciengines GmbH.

Prof. Güneysu has published more than 35 journal and conference papers in the area of IT security and cryptography (e.g., IEEE Transactions on Computers, Journal of Cryptographic Engineering, IACR CHES).

Dr. Kimmo Järvinen, "High-Performance Implementation of ..., "

Dr. Kimmo Järvinen
High Performance Implementation of Elliptic Curve Cryptography with Reconfigurable Hardware,
Dr. Kimmo Järvinen,
Aalto University, Finland

Time: 09:40 - 10:00am
Location: Gold Room

Abstract:

This work presents a highly optimized FPGA-based implementation of elliptic curve cryptography. The work relies on state-of-the-art algorithms and implementation techniques. In contrast to many other published elliptic curve processors, the implementation fully utilizes the possibilities offered by reconfigurability: all hierarchical levels of elliptic curve operations are optimized for a specific elliptic curve and its parameters, which ensures the best possible performance. Support for other curves and parameters is achieved through reconfiguration. The results show that one modern FPGA chip is capable of performing over 1,000,000 scalar multiplications per second on a standardized 163-bit elliptic curve.
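Scalar multiplication, the operation being counted here, can be illustrated with the classic double-and-add algorithm. The sketch below uses a hypothetical toy curve over GF(97) rather than the standardized 163-bit curve of the paper; real implementations use standardized curves and far more sophisticated arithmetic.

```python
# Double-and-add scalar multiplication on the toy curve
# y^2 = x^3 + 2x + 3 over GF(97). Points are affine pairs; None is the
# point at infinity. Requires Python 3.8+ for pow(x, -1, p).
P, A = 97, 2  # field prime and curve coefficient a

def add(p1, p2):
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                                   # p1 == -p2
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P          # chord slope
    x3 = (lam * lam - x1 - x2) % P
    return x3, (lam * (x1 - x3) - y1) % P

def scalar_mul(k, pt):
    acc = None
    while k:                  # scan k from least significant bit up
        if k & 1:
            acc = add(acc, pt)
        pt = add(pt, pt)      # double
        k >>= 1
    return acc

G = (3, 6)                    # a point on the curve: 36 = 27 + 6 + 3 mod 97
assert scalar_mul(2, G) == add(G, G)
assert add(scalar_mul(3, G), scalar_mul(4, G)) == scalar_mul(7, G)
```

The per-curve specialization described in the abstract fixes the field, the curve coefficients, and the coordinate system at synthesis time, so every level of this hierarchy (field arithmetic, point addition, scalar recoding) can be hard-optimized.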

Bio:

Kimmo Järvinen received M.Sc. (Tech.) and D.Sc. (Tech.) degrees from Helsinki University of Technology in 2003 and 2008, respectively. His doctoral thesis was entitled "Studies on high-speed hardware implementation of cryptographic algorithms".

Dr. Järvinen is currently a postdoctoral researcher in the cryptography group of the Department of Information and Computer Science at Aalto University in Espoo, Finland. From 2008 to 2010 he worked on the European Union 7th Framework Programme project Computer Aided Cryptography Engineering (CACE). Starting at the beginning of 2011, he holds a three-year postdoctoral researcher's project funded by the Academy of Finland. The project studies methods for implementing cryptographic algorithms efficiently in environments with strict constraints on performance (speed), resource requirements, and resistance to side-channel attacks.


Creating Science for Cyber-Security,
Chairs: Prof. Shiu-Kai Chin and Prof. William L. Harrison

Creating the Science & Engineering for Cyber-Security,

Prof. Shiu-Kai Chin
Chair:
Prof. Shiu-Kai Chin,
Syracuse University, USA
Senior Scientist, Serco-NA, Inc.

Bio:

Shiu-Kai Chin is a Professor in the Department of Electrical Engineering and Computer Science at Syracuse University, Syracuse, New York. His research applies mathematical logic to the engineering of trustworthy systems. He and Susan Older are authors of the textbook Access Control, Security, and Trust: A Logical Approach, CRC Press 2011. His research is used by JP Morgan Chase and by the Air Force Research Laboratory. He is a member of the National Institute of Justice's Electronic Crime Technical Working Group. Shiu-Kai is Director of the Center for Information and Systems Assurance and Trust. He has served as interim dean of the L.C. Smith College of Engineering and Computer Science from 2006-2008 and Computer Engineering Program Director from 2002-2006. In 1997 he was appointed Laura J. and L. Douglas Meredith Professor for Teaching Excellence. Prior to joining SU, Shiu-Kai was a senior engineer and program manager at General Electric. During his eleven years at GE, his designs were part of several products including a nuclear fuel-rod monitor, a memory manager for a heart imaging system similar to a CAT scanner, and a custom radiation-hard CMOS processor combined with a GaAs C-band transceiver for controlling phased-array radars. He is a graduate of GE's Advanced Course in Engineering.


Prof. William L. Harrison
Chair:
Prof. William L. Harrison,
University of Missouri, USA

Bio:

William Harrison is an associate professor in the Department of Computer Science at the University of Missouri in Columbia, Missouri. His research applies structures and techniques from programming-languages research to the construction of high-assurance, secure systems. Dr Harrison is the director of the newly formed Center for High Assurance Computing at the University of Missouri and was instrumental in achieving its recognition as a National Security Agency Center of Academic Excellence. Dr Harrison received his BA in Mathematics from Berkeley in 1986 and his doctorate in Computer Science from the University of Illinois at Urbana-Champaign in 2001. From 2000 to 2003, he was a post-doctoral research associate at the Oregon Graduate Institute in Portland, Oregon, where he was a member of the Programatica project. Dr Harrison joined the Department of Computer Science at the University of Missouri in 2003 and received the NSF CAREER award from the CyberTrust program in 2008. His research interests include language-based computer security and trustworthy computing.

Abstract

Securing the Internet, electronic databases, protocols, financial services, telecommunications networks, power grids, military systems, and cyber-physical systems in general is a widespread and growing concern. One major challenge is that cyber-space is synthetic and only partially tied to the physical world. However, security failures at the hardware level, e.g., failure to secure physical memory, have potentially devastating consequences at all higher levels of abstraction.

Another major challenge is that cyber-space is a contested environment. Systems must be secured against attacks. The synthetic nature of cyber-space places few constraints on attackers or defenders of systems operating in it. As of now, no single area of physical science, social science, or mathematics has emerged as the fundamental scientific basis for the engineering of secure cyber-systems.

Recently, the US Department of Defense has called for the creation of a science of cyber-security to increase our understanding of cyber-security and to improve the security and integrity of systems. This is described in the so-called JASON Report [JASON]. The proposed ERSA'11 session will describe some of the research that government, academic, and industrial researchers are doing to provide a rigorous basis for the science and engineering of secure cyber-systems.

Papers

Logic Design for Access,
Prof. Shiu-Kai Chin

Prof. Shiu-Kai Chin
Logic Design for Access Control, Security, Trust, and Assurance,
Prof. Shiu-Kai Chin,
Syracuse University, USA
Senior Scientist, Serco-NA, Inc.
Time: 11:20am - 11:50am
Location: Gold Room

Abstract:

Designers of hardware and software frequently do their work as part of larger systems in which the confidentiality, integrity, and availability of information and other resources are a primary concern. Whether the system is a military one where assurance of mission-critical capabilities is paramount, or a financial-services system where assurance of the integrity of financial data and transactions is crucial, the concerns are the same. What are the access control and security policies and mechanisms that permit or deny the use of a resource or capability? Trust: who or what is believed, and under what circumstances? Assurance: how do we know that what is proposed or implemented makes sense and is justifiable? For decades, pundits have said that security must be designed into a system from the very beginning. Nevertheless, until front-line and newly graduated engineers can design and reason about security in ways similar to how they use logic to design hardware, the security of our systems will remain poor. This paper describes a logic for reasoning about access control, security, and trust with design and verification engineers in mind. The logic and its accompanying formal verification methods span the full range of abstraction from hardware, firmware, and operating systems, through middleware, networks, protocols, and security policies, up to concepts of operations. We have used this logic to describe and verify the concept of operations for interoperable credentials for ultra-high-value commercial transactions with JP Morgan Chase. We are using this logic and its associated verification tools to assure the integrity and security of command and control for Air Force operations. The logic and methods have been taught to over 226 junior and senior ROTC cadets from over forty US universities as part of the Air Force Research Laboratory's Advanced Course in Engineering Cyber Security Boot Camp. They have also been taught to over twenty active-duty Air Force officers and DoD contractors. This paper also touches on our experience applying our methods to real products and our students' experiences learning these methods.
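The flavor of such an access-control logic can be conveyed with a toy forward-chaining checker for two of its standard rules, Controls and Speaks-for. The scenario, fact encoding, and principal names below are an illustrative sketch, not the paper's actual system.

```python
# Miniature derivation engine for two core access-control rules:
#   Controls:   from (P controls phi) and (P says phi), derive phi holds.
#   Speaks-for: from (Q speaks_for P) and (Q says phi), derive (P says phi).
# Facts are tuples; the scenario below is hypothetical.

def derive(facts):
    facts = set(facts)
    while True:
        new = set()
        for f in facts:
            if f[0] == "speaks_for":          # (speaks_for, Q, P)
                _, q, p = f
                for g in facts:
                    if g[0] == "says" and g[1] == q:
                        new.add(("says", p, g[2]))
            elif f[0] == "controls":          # (controls, P, phi)
                _, p, phi = f
                if ("says", p, phi) in facts:
                    new.add(("holds", phi))
        if new <= facts:                      # fixed point reached
            return facts
        facts |= new

facts = {
    ("controls", "KeyOfficer", "open_vault"),   # trust assumption
    ("speaks_for", "Alice", "KeyOfficer"),      # delegation
    ("says", "Alice", "open_vault"),            # request
}
assert ("holds", "open_vault") in derive(facts)
```

In the full logic these derivations are carried out as machine-checked proofs, which is what lets the same framework span hardware, protocols, and concepts of operations.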

Bio:

Shiu-Kai Chin is a Professor in the Department of Electrical Engineering and Computer Science at Syracuse University, Syracuse, New York. His research applies mathematical logic to the engineering of trustworthy systems. He and Susan Older are the authors of the textbook Access Control, Security, and Trust: A Logical Approach (CRC Press, 2011). His research is used by JP Morgan Chase and by the Air Force Research Laboratory. He is a member of the National Institute of Justice's Electronic Crime Technical Working Group. Shiu-Kai is Director of the Center for Information and Systems Assurance and Trust. He served as interim dean of the L.C. Smith College of Engineering and Computer Science from 2006 to 2008 and as Computer Engineering Program Director from 2002 to 2006. In 1997 he was appointed Laura J. and L. Douglas Meredith Professor for Teaching Excellence. Prior to joining SU, Shiu-Kai was a senior engineer and program manager at General Electric. During his eleven years at GE, his designs were part of several products, including a nuclear fuel-rod monitor, a memory manager for a heart-imaging system similar to a CAT scanner, and a custom radiation-hard CMOS processor combined with a GaAs C-band transceiver for controlling phased-array radars. He is a graduate of GE's Advanced Course in Engineering.

The Nature of Cyber Security
Prof. Eugene Howard Spafford,

ERSA/WORLDCOMP KEYNOTE TALK

Prof. Eugene Howard Spafford
The Nature of Cyber Security,
Prof. Eugene Howard Spafford,
Purdue University, USA
Leading computer security expert
Time: 09:50 - 10:45am
Location: Lance Burton Theater

Abstract:

There is an on-going discussion about establishing a scientific basis for cyber security. Efforts to date have often been ad hoc and conducted without any apparent insight into deeper formalisms. The result has been repeated system failures, and a steady progression of new attacks and compromises.

A solution, then, would seem to be to identify underlying scientific principles of cyber security, articulate them, and then employ them in the design and construction of future systems. This is at the core of several recent government programs and initiatives.

But the question that has not been asked is whether “cyber security” is really the correct abstraction for analysis. There are some hints that perhaps it is not, and that some other approach is more appropriate for systematic study, perhaps one we have yet to define.

In this talk I will provide some overview of the challenges in cyber security, the arguments being made for exploration and definition of a science of cyber security, and also some of the counterarguments. The goal of the presentation is not to convince the audience that either viewpoint is necessarily correct, but to suggest that perhaps there is sufficient doubt that we should carefully examine some of our assumptions about the field.

Bio:

Eugene Howard Spafford is a Professor at Purdue University. A historically significant Internet figure, he is renowned for first analyzing the Morris Worm, one of the earliest computer worms, and for his prominent role in the Usenet backbone cabal. Spafford was a member of the President's Information Technology Advisory Committee from 2003 to 2005, has been an advisor to the National Science Foundation (NSF), and serves as an advisor to over a dozen other government agencies and major corporations.

Spafford attended State University of New York at Brockport for three years and completed his B.A. with a double major in mathematics and computer science in that time. He then attended the School of Information and Computer Sciences (now the College of Computing) at the Georgia Institute of Technology. He received his M.S. in 1981, and Ph.D. in 1986 for his design and implementation of the original Clouds distributed operating system kernel.

During the early formative years of the Internet, Spafford made significant contributions to establishing semi-formal processes to organize and manage Usenet, then the primary channel of communication between users, as well as being influential in defining the standards of behavior governing its use.

Science of Mission Assurance,
Dr. Sarah Muccio

Dr. Sarah Muccio
Science of Mission Assurance,
Dr. Sarah Muccio,
Air Force Research Laboratory, USA
Time: 02:00 - 02:20pm
Location: Gold Room

Abstract:

Engineers design and build artifacts - bridges, sewers, cars, airplanes, circuits, software -- for human purposes. ...

Bio:

Dr. Sarah L. Muccio (BS Mathematics, Summa Cum Laude, Youngstown State University; MS, PhD Applied Mathematics, North Carolina State University) is a mathematician for the Cyber Science Branch of the Information Directorate, Air Force Research Laboratory, Rome, NY. In the field of information assurance, Dr. Muccio works with scientists to mathematically model systems and analyze information. She conducts research on emerging technologies and maps mission essential functions to their cyber assets. Dr. Muccio enjoys educating future cyber security leaders through several Syracuse University graduate courses that she co-created as well as the Advanced Course in Engineering (ACE) program.

Grounding Trust,
Prof. Cynthia Irvine

Prof. Cynthia Irvine
Grounding Trust
Prof. Cynthia Irvine,
Naval Postgraduate School, USA
Time: 11:50am - 12:20pm
Location: Gold Room

Abstract:

The word “security” is often defined as “the freedom from danger” or “the freedom from doubt and fear.” Life is dangerous, and, as sentient beings, we experience varying levels of doubt and fear regarding the dangers to which we are exposed. Through a better understanding of the world, our response to danger has evolved from witchcraft and voodoo to science-based measures for which there is a reasonable confidence of protection against danger. Recent discussions and reports have indicated a need for a "science of security." If such a science is needed, one is led to the ineluctable conclusion that some portion of what currently postures as computer and network security is nothing more than technical witchcraft and voodoo. Here we will explore the continuum between superstition and reason in the context of the components and methods used to construct trustworthy systems.

Bio:

Dr. Cynthia Irvine is the Director of the Center for Information Systems Security Studies and Research (CISR) and a Professor of Computer Science at the Naval Postgraduate School, where she has worked since 1994. In December 2010, she was appointed chair of the Cyber Security and Operations Academic Committee at the Naval Postgraduate School. Her research centers on the design and construction of high assurance systems and multilevel security. She is an author on over 160 papers and reports on security and has supervised the research of over 120 Masters and PhD students. She has served on numerous government committees and review boards. Dr. Irvine is a recipient of the Naval Information Assurance Award. In 2004, she received the William Hugh Murray Founder’s Award from the Colloquium for Information Systems Security Education. She is a member of the ACM, a lifetime member of the ASP, and a Senior Member of the IEEE. From 2005 through 2009, she served as Vice-Chair and subsequently as Chair of the IEEE TC on Security and Privacy.

Information Flow and Noninterference-style Security,
Dr. Gerard Allwein

Dr. Gerard Allwein
Information Flow and Noninterference-style Security
Dr. Gerard Allwein,
Naval Research Laboratory, USA

Time: 01:20 - 01:40pm
Location: Gold Room

Abstract:

It has long been held that information flow security models should be organized with respect to a theory of information, but typically they are not. The appeal of an information-theoretic foundation for information flow security seems natural, compelling, and, indeed, almost tautological. I will describe how channel theory (a theory of information based in logic) can provide a basis for noninterference-style security models. One can build simple logic systems for reasoning about shared resources, or more complicated systems combining Goguen-Meseguer noninterference with Floyd-Hoare semantics to yield Owicki-Gries semantics for detecting parallel program interactions. Channel theory can also be married to monads from functional programming to build systems where the security argument tracks the monadic structure of the program. The evidence presented here suggests that channel theory is a useful organizing principle for information flow security.
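The Goguen-Meseguer notion mentioned above can be made concrete with a purge-based check on a toy two-level machine: a system is noninterfering if removing all High inputs from a trace leaves the Low observation unchanged. The machine and traces below are a hypothetical illustration, not channel theory itself.

```python
# Goguen-Meseguer noninterference, purge-style, on a toy state machine.
# Inputs are (level, value) pairs with levels "L" (Low) and "H" (High).

def run(trace):
    """Low observation: sum of Low inputs (this machine ignores High)."""
    return sum(v for lvl, v in trace if lvl == "L")

def purge(trace):
    """Delete all High inputs from the trace."""
    return [(lvl, v) for lvl, v in trace if lvl == "L"]

def noninterfering(machine, traces):
    return all(machine(t) == machine(purge(t)) for t in traces)

traces = [
    [("L", 1), ("H", 9), ("L", 2)],
    [("H", 5), ("H", 7)],
]
assert noninterfering(run, traces)          # High cannot influence Low

def leaky_run(trace):
    return sum(v for _, v in trace)          # counts High inputs too!

assert not noninterfering(leaky_run, traces)  # the leak is detected
```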

Bio:

Dr. Gerard Allwein is an algebraic logician with an undergraduate degree in Computer Science from Purdue University and a PhD from Indiana University. He studies non-standard logics with applications to security. He is also interested in theories of information that can be used to link logics together; a principal example of such linkages is channel theory. He uses these tools to analyze security issues in the use of FPGAs. After graduating from IU, he spent 10 years as Jon Barwise's Assistant Director of the Visual Inference Laboratory. He is currently a logician at the Naval Research Laboratory in Washington, D.C.

"It Takes a Village" (to create a science): From crypto science to security science,
Dr. Steven Borbash

Dr. Steven Borbash
It Takes a “Village” (to create a science): From crypto science to security science
Dr. Steven Borbash,
National Security Agency, USA
Time: 01:40 - 02:00pm
Location: Gold Room

Abstract:

Government funding for solutions to security-related problems has increased significantly in the last decade, along with interest in these problems from researchers in many fields. Much of the funding has supported near-term applications to specific systems (health-care records, automotive security, remote controls, military applications, etc.), while not enough has supported more general solutions. Many people have recently agreed that it would be useful to incentivize more theory, or "basic science", to support security. This presentation will review current government efforts to build stronger scientific foundations for cyber security.

Bio:

Steven Borbash is a Senior Researcher in Information Assurance at the National Security Agency. He has worked on problems of communications and computer security for more than twenty years, concentrating on wireless and mobile problems. He received a B.S. in Physics from the University of Toronto and an M.S. in Applied Mathematics from the University of Maryland, Baltimore County. He received his PhD in Electrical and Computer Engineering from the University of Maryland, College Park in 2004 for work on the fundamentals of wireless networking. His graduate advisor was Anthony Ephremides. Dr. Borbash won the US Intelligence Community's 2010 Galileo Award for an application of economic ideas to information security problems.

The Confluence of Secure Hardware and Programming Languages,
Chair Prof. William L. Harrison

RC + LBS: The Confluence of Secure Hardware and Programming Languages,

Prof. William L. Harrison
Chair:
Prof. William L. Harrison,
University of Missouri, USA

Bio:

William Harrison is an associate professor in the Department of Computer Science at the University of Missouri in Columbia, Missouri. His research applies structures and techniques from programming-languages research to the construction of high-assurance, secure systems. Dr Harrison is the director of the newly formed Center for High Assurance Computing at the University of Missouri and was instrumental in achieving its recognition as a National Security Agency Center of Academic Excellence. Dr Harrison received his BA in Mathematics from Berkeley in 1986 and his doctorate in Computer Science from the University of Illinois at Urbana-Champaign in 2001. From 2000 to 2003, he was a post-doctoral research associate at the Oregon Graduate Institute in Portland, Oregon, where he was a member of the Programatica project. Dr Harrison joined the Department of Computer Science at the University of Missouri in 2003 and received the NSF CAREER award from the CyberTrust program in 2008. His research interests include language-based computer security and trustworthy computing.

Abstract

Generating hardware from high-level languages is an active area of research within reconfigurable computing (RC). One motivation is to facilitate hardware design and synthesis by leveraging “power tools” and techniques from software engineering. These software power tools include high-level abstractions and modularity, static analysis, and highly optimizing compilers.

A parallel development in programming languages research is language-based security (LBS). LBS seeks to apply tools and techniques from languages research to the development of secure software and systems. One common LBS approach, for example, uses static analyses and type systems to enforce or detect insecure flows in programs.
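The static-analysis approach described above can be illustrated with a minimal explicit-flow check over a two-point security lattice (Low ≤ High). The variable names, labels, and program below are hypothetical, purely to show the shape of such a check.

```python
# A minimal language-based security (LBS) check: given security labels on
# variables, reject any assignment whose source label is higher than its
# destination label (an insecure explicit flow High -> Low).

LEVEL = {"Low": 0, "High": 1}

def check_flows(labels, assignments):
    """Return the list of insecure explicit flows (dst, src)."""
    return [(dst, src) for dst, src in assignments
            if LEVEL[labels[src]] > LEVEL[labels[dst]]]

labels = {"pin": "High", "display": "Low", "log": "High"}
program = [("log", "pin"),       # High -> High: allowed
           ("display", "pin")]   # High -> Low: insecure flow
assert check_flows(labels, program) == [("display", "pin")]
```

Real LBS type systems additionally track implicit flows through control dependence; this sketch covers only direct assignments.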

This session considers the potential integration of ideas from reconfigurable computing and language-based security into a scientific and engineering foundation for trustworthy and secure hardware systems. In particular, a non-exhaustive list of potential subjects for this session is:

  • language paradigms (functional, domain-specific, etc.) for codesign,
  • automated testing and formal verification techniques,
  • model-driven design and synthesis of hardware, and
  • security models specialized to hardware systems.

Papers

3-D Extensions for Trustworthy Systems,
Dr. Ted Huffmire

Dr. Ted Huffmire
3-D Extensions for Trustworthy Systems
Ted Huffmire, Timothy Levin, Cynthia Irvine, Ryan Kastner, and Timothy Sherwood,
Naval Postgraduate School in Monterey, California, USA
Time: 08:10 - 08:30am
Location: Gold Room

Abstract:

Developing high assurance systems is costly. Trustworthy system development entails a high non-recurring engineering (NRE) cost together with a low volume of units over which to amortize that cost. This results in an increasing gap between systems that meet DoD requirements and those that are available to consumers. Also, the potential for developmental and operational attacks against hardware requires countermeasures that make it very expensive to design and manufacture the custom hardware used to build high assurance systems. To address these problems, we propose an approach to trustworthy system development based on 3-D integration, an emerging chip fabrication technique in which two or more dies are fabricated individually and then combined into a single stack using vertical conductive posts. With 3-D integration, a general-purpose die, or computation plane, can be combined with a special-purpose die, or control plane. Our approach has the potential to reduce the cost of developing hardware for high assurance systems by joining a mass-produced computation plane with a custom control plane. Our approach provides several advantages, including

  1. dual use of the computation plane, which can be optionally combined with a control plane housing application-specific security functions;
  2. physical isolation and logical disentanglement of security functions in the control plane (from the non-security circuitry in the computation plane);
  3. controlled lineage (e.g., use of a trusted foundry to manufacture the control plane);
  4. high bandwidth communication and low latency between the computation plane and components in the control plane such as coprocessors, memory, or other devices; and
  5. direct, granular access by the control plane to chip features in the computation plane.

In the following, we discuss the security advantages of using 3-D integrated hardware in sensitive applications, where security is of the utmost importance, and we outline problems, challenges, attacks, solutions, and topics for future research.

Bio:

Ted Huffmire is an Assistant Professor of Computer Science at the Naval Postgraduate School in Monterey, California. His research spans both computer security and computer architecture, focusing on hardware-oriented security and the development of policy enforcement mechanisms for application-specific devices. He has a Ph.D. in Computer Science from the University of California, Santa Barbara. He is a member of the IEEE and the ACM.

Dr. Andrew Gill
Declarative FPGA Circuit Synthesis using Kansas Lava,
Dr. Andrew Gill,
University of Kansas, USA

Time: 08:30 - 08:50am
Location: Gold Room

Abstract:

Designing and debugging hardware components is challenging, especially when performance requirements demand a complex orchestra of cooperating and highly synchronized computation engines. New language-based solutions to this problem have the potential to revolutionize how we think about and build circuits. In this paper, we describe our language-based approach to semi-formal co-design. Using examples, we show how the worker/wrapper transformation and other refinement techniques can be used to take concise descriptions of specialized computation and generate efficient circuits. Kansas Lava, our high-level hardware description language built on top of the functional language Haskell, acts as a bridge between these computational descriptions and synthesizable VHDL. Central to the whole approach is the use of Haskell types to express communication and timing choices between computational components. Design choices and engineering compromises during co-design become type-centric refinements, encouraging architectural exploration.
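The declarative idiom behind Kansas Lava, describing circuits as pure functions and composing small ones into larger ones, can be sketched outside Haskell as well. The following illustrative Python stand-in (not Kansas Lava's actual API, which is Haskell and emits VHDL) builds a ripple-carry adder from half and full adders and checks it by exhaustive simulation:

```python
# Illustrative sketch only: Kansas Lava descriptions are written in Haskell,
# but the core idiom (circuits as pure functions composed into larger
# circuits) can be mimicked with Python booleans standing in for wires.

def half_adder(a, b):
    """Return (sum, carry) for one-bit inputs."""
    return a ^ b, a & b

def full_adder(a, b, cin):
    """Compose two half adders and an OR gate into a full adder."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, cin)
    return s2, c1 | c2

def ripple_adder(xs, ys):
    """n-bit ripple-carry adder over little-endian bit lists."""
    carry, out = False, []
    for a, b in zip(xs, ys):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

def to_bits(n, width):
    return [bool((n >> i) & 1) for i in range(width)]

def from_bits(bits):
    return sum(1 << i for i, b in enumerate(bits) if b)

if __name__ == "__main__":
    # Exhaustively "simulate" the 4-bit adder against integer addition.
    for x in range(16):
        for y in range(16):
            bits, carry = ripple_adder(to_bits(x, 4), to_bits(y, 4))
            assert from_bits(bits) + (16 if carry else 0) == x + y
    print("4-bit ripple adder verified")
```

In the real Kansas Lava, the same structural description would be captured over a signal type and reified to synthesizable VHDL rather than simulated over booleans.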

Bio:

Andrew (Andy) Gill was born and educated in Scotland, and has spent his professional career in the United States. Andy received his Ph.D. from the University of Glasgow in 1996, then spent three years in industry as a compiler developer, and a year in academia as a principal project scientist. He co-founded Galois in 2000, a technology transfer company that used language technologies to create trustworthiness in critical systems. In 2008 he returned to academia and research, joining the University of Kansas and the Information and Telecommunication Technology Center.

Andy believes that functional languages like Haskell are a great medium for expressing algorithms and solving problems. Since returning to academia, he has targeted the application area of telemetry, specializing in generating circuits from specifications. His research interests include optimization, language design, debugging, and dependability. The long-term goal of his research is to offer software engineers and functional language practitioners the opportunity to write clear and high-level executable specifications that can realistically be compiled into efficient implementations.

Prof. William L. Harrison
Towards Semantics-directed System Design and Synthesis,
Prof. William L. Harrison,
University of Missouri, USA
Time: 08:50 - 09:10am
Location: Gold Room

Abstract:

Synthesis of hardware from high level programming languages is a hot topic within reconfigurable computing. A recent trend in languages research, language-based security, applies ideas from programming language design and implementation to the challenge of generating secure software. Can ideas and techniques from language-based security be combined with HLL synthesis towards the generation of high assurance, secure hardware? Are some languages better than others for trustworthy and secure systems? Are some language features more appropriate to HLL synthesis than others? Can languages be designed that are sufficiently high-level (i.e., supporting formal verification) while not sacrificing efficient translation into hardware?

Functional languages have been proposed as a means of addressing these challenges and, in this talk, we discuss our application of a form of functional programming---what we call "monadic programming"---to the generation of high assurance and secure systems. We attempt to address these challenges through the design of a domain-specific functional language that is sufficiently expressive to capture essential system behaviors and yet still semantically simple enough to support formal verification. The resulting language capitalizes on an established tool of functional programming, the monad, to express purely functional programs in a nearly-imperative style. The research described in this talk uses monadic programming as a flexible, modular organizing principle for secure system design and implementation.
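To make the idea of monadic programming concrete, here is a minimal, hypothetical State-monad sketch in Python rather than Haskell; `State`, `unit`, `get`, and `put` mirror the usual Haskell names but are illustrative only, not the speaker's actual language:

```python
# A minimal State "monad" in Python, illustrating the idea the talk builds on:
# imperative-looking sequences of steps expressed as pure functions that
# thread a state value explicitly. (The actual work uses Haskell monads;
# the names here are illustrative only.)

class State:
    """Wraps a pure function: state -> (result, new_state)."""
    def __init__(self, run):
        self.run = run

    def bind(self, f):
        """Sequence: feed this computation's result into f, threading state."""
        def stepped(s):
            a, s2 = self.run(s)
            return f(a).run(s2)
        return State(stepped)

def unit(a):
    """Inject a plain value without touching the state."""
    return State(lambda s: (a, s))

get = State(lambda s: (s, s))          # read the current state

def put(s):                            # replace the state
    return State(lambda _: (None, s))

# An "imperative" register increment, written purely:
def incr(delta):
    return get.bind(lambda v: put(v + delta))

program = incr(1).bind(lambda _: incr(2)).bind(lambda _: get)

if __name__ == "__main__":
    result, final = program.run(0)
    print(result, final)   # both 3: the register after two increments
```

The point mirrored here is the one in the abstract: the "program" is an ordinary pure value, yet its composition with `bind` reads nearly imperatively, which is what makes monads a useful organizing principle for system descriptions.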

Bio:

William Harrison is an associate professor in the Department of Computer Science at the University of Missouri in Columbia, Missouri. His research applies structures and techniques from programming languages research to the construction of high assurance, secure systems. Dr Harrison is the director of the newly formed Center for High Assurance Computing at the University of Missouri and was instrumental in achieving its recognition as a National Security Agency Center of Academic Excellence. Dr Harrison received his BA in Mathematics from Berkeley in 1986 and his doctorate in Computer Science from the University of Illinois at Urbana-Champaign in 2001. From 2000 to 2003, he was a post-doctoral research associate at the Oregon Graduate Institute in Portland, Oregon, where he was a member of the Programatica project. Dr Harrison joined the Department of Computer Science at the University of Missouri in 2003 and received the NSF CAREER award from the CyberTrust program in 2008. His research interests include language-based computer security and trustworthy systems.

Runtime Adaptive Embedded Systems and Architectures

Chair:
Prof. Roman Lysecky,
University of Arizona, USA

Bio:

Roman Lysecky is an Assistant Professor of Electrical and Computer Engineering at the University of Arizona. He received his B.S., M.S., and Ph.D. in Computer Science from the University of California, Riverside in 1999, 2000, and 2005, respectively. His primary research interests focus on embedded systems design, with emphasis on dynamic adaptability, hardware/software partitioning, field-programmable gate arrays (FPGAs), and low-power methodologies. He has coauthored two textbooks on hardware description languages, published dozens of research papers in top journals and conferences, and holds one US patent. He received a CAREER award from the National Science Foundation in 2009, Best Paper Awards from the International Conference on Hardware-Software Codesign and System Synthesis (CODES+ISSS) and the Design Automation and Test in Europe Conference (DATE), and an Outstanding Ph.D. Dissertation Award from the European Design and Automation Association (EDAA) in 2006.

Abstract

As the complexity of embedded applications grows, predicting the dynamic execution behavior of embedded systems is increasingly challenging. Additionally, the environment in which a device is deployed and the data being processed can also significantly affect execution. As a result, static optimization methods may produce non-optimal design configurations, whereas runtime adaptive systems offer opportunities to optimize the system's configuration and application execution using runtime information. In this special session, several talks address aspects of the design, optimization, and application of runtime adaptive systems.

Papers

Prof. Jörg Henkel
iCore: A Run-time Adaptive Processor for Embedded Multi-core Systems,
Prof. Jörg Henkel, Lars Bauer and Artjom Grudnitsky,
Karlsruhe Institute of Technology, Germany
Time: 09:50am - 10:20am
Location: Gold Room

Abstract:

We present the iCore (invasive core), an application-specific instruction set processor (ASIP) with a run-time adaptive instruction set. Its adaptivity is controlled by the run-time system with respect to application properties that may vary during run-time. A reconfigurable fabric hosts the adaptive part of the instruction set, whereas the rest of the instruction set is fixed. We show that the iCore is particularly beneficial in an embedded multi-core system, where it performs application-specific as well as system-specific tasks. The advantages are demonstrated by means of multimedia applications.

Bio:

Professor Jörg Henkel is currently with the Karlsruhe Institute of Technology (KIT), Germany, where he directs the Chair for Embedded Systems (CES). Before that, he was with NEC Laboratories in Princeton, NJ. His current research is focused on design and architectures for embedded systems, with emphasis on low power and reliability. Prof. Henkel has organized various embedded systems and low power ACM/IEEE conferences and symposia as General Chair and Program Chair and has been a Guest Editor on these topics for various journals such as IEEE Computer Magazine. He was Program Chair of CODES'01, RSP'02, ISLPED'06, SIPS'08 and CASES'09, and served as General Chair for CODES'02 and ISLPED'09. He is or has been a steering committee member of major conferences in the embedded systems field, such as ICCAD and CODES+ISSS, and is an editorial board member of various journals such as IEEE TVLSI and JOLPE. He has given full- and half-day tutorials at leading conferences such as DAC, ICCAD, and DATE. Prof. Henkel received the 2008 DATE Best Paper Award and the 2009 IEEE/ACM William J. McCalla ICCAD Best Paper Award. He is the Chairman of the IEEE Computer Society, Germany Section, and the Editor-in-Chief of the ACM Transactions on Embedded Computing Systems (ACM TECS). He is an initiator and the spokesperson of the German national program on 'Dependable Embedded Systems' (SPP 1500) funded by the German Science Foundation (DFG). He holds ten US patents.

Prof. Koen Bertels
Advanced Profiling of Applications for Heterogeneous Multi Core Platforms,
Prof. Koen Bertels, Arash Ostadzadeh, Roel Meeuws
Delft University, The Netherlands

Time: 10:40 - 11:00am
Location: Gold Room

Abstract:

The increased complexity of programming on multi-processor platforms requires more insight into program behavior, for which programmers need increasingly sophisticated methods for profiling, instrumentation, measurement, analysis, and modeling of applications. In particular, tools to thoroughly investigate the memory access behavior of applications become crucial due to the widening gap between memory bandwidth/latency and processing performance. To address these challenges, we developed a profiling framework in the context of the DWB (Delft Workbench), a semi-automatic tool platform for integrated hardware/software co-design targeting heterogeneous computing systems containing reconfigurable components. The profiling framework consists of two parts. The first is an advanced memory access profiling toolset that extracts information related to memory accesses during the execution of an application and provides a detailed overview of the runtime behavior of its memory access patterns; this information can be used for partitioning and mapping purposes. The second part is a statistical model that allows predictions to be made early in the design phase regarding memory and hardware usage, based on software metrics. We examine real applications from the multimedia domain in detail to validate and demonstrate the potential of the profiling framework in the DWB.

Bio:

Koen Bertels is an associate professor in the Computer Engineering group, where he heads the Delft Workbench Project, which aims to develop an entire tool chain offering semi-automated support to developers for developing new, or porting existing, applications to heterogeneous multi-core systems. The Delft Workbench assumes the Molen machine organisation. The Delft Workbench team comprises 12 PhD students, one post-doc, and several M.Sc. students. He has participated in several Dutch and EU projects, such as Morpheus, and was scientific coordinator of the hArtes project, which also led to the creation of a spin-off company, BlueBee. Arash Ostadzadeh and Roel Meeuws are PhD students in the Computer Engineering Lab, working under the supervision of Koen Bertels.

Dirk Stroobandt
How Parameterizable Run-time FPGA-Reconfiguration can Benefit Adaptive Embedded Systems,
Dirk Stroobandt, Karel Bruneel
Ghent University, Belgium

Time: 11:00 - 11:20am
Location: Gold Room

Abstract:

In this presentation, we assume that runtime adaptive embedded systems have proven benefits over static implementations, and we ask ourselves how such an adaptive system could be implemented. It is clear that the system adaptation should be done very fast, so that the overhead of adapting the system does not overshadow the benefits obtained by the adaptivity. In this presentation, we propose a methodology for FPGA design that allows fast reconfiguration for dynamic data folding applications. Dynamic Data Folding (DDF) is a technique to dynamically specialize an FPGA configuration according to the values of a set of parameters. The general idea of DDF is that each time the parameter values change, the device is reconfigured with a configuration that is specialized for the new parameter values. Since specialized configurations are smaller and faster than their generic counterpart, the hope is that the corresponding system implementation will be more cost-efficient. In this presentation, we show that DDF can be implemented on current commercial FPGAs by using the parameterizable run-time reconfiguration methodology. This methodology comprises a tool flow that automatically transforms DDF applications into a runtime adaptive implementation. The tool flow consists of two parts. The offline (static) tool flow optimizes FPGA configurations with the standard FPGA implementation steps (synthesis, technology mapping, placement and routing) while handling the parameters properly. The online (runtime) tool flow takes the result of the static tool flow, evaluates the reconfiguration procedure for the current values of the parameters, and automatically (partially) reconfigures the device. Experimental results with this tool flow show that we can reap the benefits (smaller area and faster clocks) without too much reconfiguration overhead.
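As a toy-scale illustration of why specialized configurations are smaller than their generic counterpart, consider a shift-add multiplier parameterized by a constant k: the generic version keeps an adder stage for every parameter bit, while a version specialized for a fixed k folds away the zero bits. The Python model below is only an analogy (the real DDF flow specializes FPGA configurations, not software) and counts the surviving adder stages:

```python
# Toy model of dynamic data folding: a generic shift-add multiplier keeps
# one partial-product stage per bit of the parameter, while a configuration
# specialized for a fixed parameter value k folds the zero bits away.
# (Illustrative only; the real flow specializes FPGA bitstreams, not Python.)

WIDTH = 8

def generic_multiply(x, k):
    """Generic circuit: always provides all WIDTH adder stages."""
    acc, cost = 0, 0
    for i in range(WIDTH):
        acc += (x << i) if (k >> i) & 1 else 0
        cost += 1                      # one adder stage per bit, used or not
    return acc, cost

def specialize(k):
    """'Reconfigure' for a fixed k: keep only the non-zero partial products."""
    shifts = [i for i in range(WIDTH) if (k >> i) & 1]
    def circuit(x):
        acc = 0
        for i in shifts:               # only popcount(k) adder stages remain
            acc += x << i
        return acc, len(shifts)
    return circuit

if __name__ == "__main__":
    mul_by_10 = specialize(10)         # 10 = 0b1010 -> two partial products
    print(generic_multiply(7, 10))     # (70, 8)
    print(mul_by_10(7))                # (70, 2)
```

When the parameter changes, `specialize` must be re-run, which is the analogue of the runtime reconfiguration step whose overhead the tool flow seeks to minimize.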

Bio:

Dirk Stroobandt graduated in 1994 and obtained the Ph.D. degree in electrotechnical engineering from Ghent University, Belgium, in 1998. Until 2002 he was a post-doctoral fellow with the Fund for Scientific Research - Flanders (Belgium) (F.W.O.), and in 2002 he was appointed professor at Ghent University, affiliated with the Department of Electronics and Information Systems (ELIS), Computer Systems Lab (CSL). He currently leads the research group HES of about 10 people, with interests in semi-automatic hardware design methodologies and tools, run-time reconfiguration, and reconfigurable multiprocessor networks. Dirk Stroobandt is the inaugural winner of the ACM/SIGDA Outstanding Doctoral Thesis Award in Design Automation (1999) and received the 'Scientific Prize Alcatel Bell' for his work in 2002. He visited the lab of Prof. Fadi J. Kurdahi at UCI (1997) and also the group of Andrew B. Kahng at UCLA for a year as a post-doctoral researcher (1999-2000). Dirk Stroobandt initiated and co-organized the International Workshop on System-Level Interconnect Prediction (SLIP) in 1999 and is still actively involved in this workshop. He is guest editor of two special issues of the IEEE Transactions on VLSI Systems on System-Level Interconnect Prediction and of a special issue on SLIP for Integration, the VLSI Journal. He was also an associate editor of ACM's TODAES.

How to Effectively Program Reconfigurable Multi-Core Embedded Systems?

Chair:
Dr. Pedro C. Diniz,
INESC-ID, Lisbon, Portugal

Bio:

Dr. Diniz received his M.S. in Electrical and Computer Engineering from the Technical University in Lisbon, Portugal, and his Ph.D. in Computer Science from the University of California, Santa Barbara in 1997. From 1997 until 2007 he was a researcher with the University of Southern California’s Information Sciences Institute (USC/ISI) and an Assistant Professor of Computer Science at the University of Southern California in Los Angeles, California. At USC/ISI he was the technical lead of DARPA-funded and DoE-funded research projects, in particular the DEFACTO project, totaling $6M USD. The DEFACTO project combined the strengths of traditional compilation approaches with commercially available EDA synthesis tools and led to the development of a prototype compiler for the automatic mapping of image processing algorithms written in programming languages such as C to Field-Programmable Gate Array (FPGA)-based computing architectures. He has authored or co-authored many internationally recognized scientific journal papers and over 50 international conference papers. He is heavily involved in the scientific community, having served on the technical program committees of over 20 international conferences in the areas of high-performance, reconfigurable, and field-programmable computing. He is currently one of the scientific coordinators of the EU-funded REFLECT research project, which addresses the programmability of reconfigurable architectures.

Abstract

The ability to continually increase the number of available transistors on a die has led to the emergence of many-core and multi-core computing architectures, promising order-of-magnitude performance improvements over single-core solutions through sheer concurrency. The abundance of transistors also enables the development of heterogeneous and (dynamically) reconfigurable architectures targeting selected markets, such as real-time and/or embedded high-performance computing, through a mix of traditional and customized cores.

These architectures offer a wide variety of resources and, for efficiency, expose distinct execution models (e.g., threading and streaming) as well as a wide range of hardware resources (e.g., internal memories, custom configurable caches, or dedicated functional units). This diversity has exacerbated the already complex application development process, as programmers must be aware of and explicitly manage all the details of the target systems. The lack of powerful abstractions at various levels forces a plethora of tools to coexist, leaving the programmer to bridge the gap between them at huge development cost. Developers cannot easily express high-level application requirements (such as data rates or throughput) in the de facto standard programming languages, nor can they rely on flexible resource management layers with application-specific resource management policies to best leverage the available resources. As a result, applications typically do not exploit the true potential of the target architectures.

This session presents a focused set of EU-funded research projects describing approaches to various programming issues raised by these multi-core, heterogeneous, and possibly reconfigurable architectures. First we present the MORPHEUS architecture, a highly dynamic reconfigurable architecture that exposes the raw programming issues at the hardware level, such as the allocation and configuration of heterogeneous resources at various levels of granularity. The MORPHEUS and hArtes projects aimed at offering programmable mechanisms at the lowest level of the architecture to enable the management of these resources at the software level. Further up in the abstraction chain, the 2PARMA project tackles programmability complexity by offering a set of flexible abstractions at the run-time level that provide adaptive task and data allocation as well as scheduling of the different tasks and of the accesses to the data. This approach provides a layer that the compiler can effectively leverage with application knowledge to best exploit the heterogeneity of the target architecture. Lastly, at the highest level, the REFLECT project attacks programmer productivity by combining aspect-oriented programming techniques with a domain-specific language to enable programmers to express non-functional application requirements beyond what current high-level programming languages allow. Combined, these three projects propose specific solutions that address the programmability of these configurable multi-core heterogeneous systems at the key levels of abstraction, namely, the hardware, the operating and run-time system, and the high-level programming language.

Undoubtedly, the problems raised by the emerging multi-core heterogeneous computing platforms are inherently very hard. We believe that only holistic approaches, such as the ones pursued by the research projects presented in this session, have a chance of success. The continued funding in these areas of research and the growing interest from industry in tackling them show how critical and widespread these issues have become. This session should therefore be of interest to any engineer and developer of current embedded solutions, as reconfigurable multi-core systems will undoubtedly take center stage in the upcoming decade as the de facto standard computing platform.

Papers

Prof. Koen Bertels
Heterogeneous Multicore Computing: Challenges and Opportunities. Experiences from the hArtes Project,
Prof. Koen Bertels and Vlad-Mihai Sima and Georgi Kuzmanov,
Delft University of Technology (TUD), The Netherlands
Time: 08:10 - 08:30am
Location: Gold Room

Abstract:

This paper discusses the different problems that were encountered during the hArtes project and how those challenges were met. The paper presents the hArtes hardware platform as well as some of the applications that were mapped onto the board. The mapping process involves determining which parts of the application should be executed by which hardware component. When viewed in isolation, kernel-based acceleration can produce significant speedups. However, when the entire application is mapped, it rarely lives up to this full potential, due to issues beyond mere Amdahl's law. Many problems have to do with communication bottlenecks. And since hArtes always used sequential C code as its starting point, finding enough parallelism in those applications was another factor limiting overall performance improvement.

Bio:

Koen Bertels is an associate professor in the Computer Engineering group, where he heads the Delft Workbench Project, which aims to develop an entire tool chain offering semi-automated support to developers for developing new, or porting existing, applications to heterogeneous multi-core systems. The Delft Workbench assumes the Molen machine organisation. The Delft Workbench team comprises 12 PhD students, one post-doc, and several M.Sc. students. He has participated in several Dutch and EU projects, such as Morpheus, and was scientific coordinator of the hArtes project, which also led to the creation of a spin-off company, BlueBee. Arash Ostadzadeh and Roel Meeuws are PhD students in the Computer Engineering Lab, working under the supervision of Koen Bertels.

Dr. Alexandros Bartzas
Runtime Resource Management Techniques for Many-core Architectures: The 2PARMA Approach,
Dr. Alexandros Bartzas, et al.,
Institute of Communications and Computer Systems, Athens, Greece,

Time: 10:40 - 11:00am
Location: Gold Room

Abstract:

The current trend in computing architectures is to replace complex superscalar architectures with meshes of small homogeneous processing units connected by an on-chip network. This trend is mostly dictated by inherent silicon technology limits, which draw closer as process densities increase. The number of cores integrated in a single chip is expected to increase rapidly in the coming years, moving from multi-core to many-core architectures. This trend will require a global rethinking of software and hardware approaches. Multi-core architectures are nowadays prevalent in general-purpose computing and in high-performance computing. In addition to dual- and quad-core general-purpose processors, more scalable multi-core architectures are widely adopted for high-end graphics and media processing.

Real-time applications, hard or soft, raise the challenge of unpredictability. This is an extremely difficult problem in the context of modern, dynamic, multiprocessor platforms which, while providing potentially high performance, make the task of timing prediction extremely difficult. Also, with the growing software content in embedded systems and the diffusion of highly programmable and re-configurable platforms, software is given an unprecedented degree of control over resource utilization. Software generation and performance evaluation tools have to be made aware of the particularities of a given memory hierarchy, or of the dynamic features of the processor microarchitecture, so as to both generate efficient code and accurately predict performance numbers. Existing approaches to runtime resource management still require substantial design-time effort, during which profiling information is gathered and analysed in order to construct a runtime scheduler that can be lightweight. One can therefore note that there is a trade-off to be made between design-time effort and runtime effort, where the approaches mentioned above favour more design-time effort in order to require less runtime effort from the runtime resource manager.

In this paper we present a run-time resource manager (RTRM) for many-core architectures. The RTRM offers adaptive task and data allocation as well as scheduling of the different tasks and of the accesses to the data. Furthermore, appropriate power management techniques as well as integration with the Linux OS will be developed. The RTRM takes into account: i) the requirements/specifications of many-core architectures, applications, and design techniques; ii) the dynamic compilation chain and OS support for resource management; and iii) a design space exploration phase.

Bio:

N/A

Prof. João M.P. Cardoso,
A New Approach to Control and Guide the Mapping of Computations to FPGAs,
Prof. João M.P. Cardoso, et al.,
University of Porto, Portugal
Time: 09:10am - 09:40am
Location: Gold Room

Abstract:

Field-Programmable Gate Arrays (FPGAs) are becoming increasingly popular as computing platforms for high-performance embedded systems. Their flexibility and customization capabilities allow them to achieve orders of magnitude better performance than conventional embedded computing systems. Programming FPGAs is, however, extremely cumbersome and error-prone, and as a result their true potential is often achieved only at unreasonably high design effort. The REFLECT (Rendering FPGAs to Multi-Core Embedded Computing) project's design flow consists of a novel compilation and synthesis system approach for FPGA-based platforms. REFLECT's design flow relies on aspect-oriented specifications to convey critical domain knowledge to optimizers and mapping engines. An aspect-oriented programming language, LARA (LAnguage for Reconfigurable Architectures), allows the exploration of alternative architectures and design patterns, enabling the generation of flexible hardware cores that can be easily incorporated into larger multi-core designs. We are evaluating the effectiveness of the proposed approach using audio processing and real-time avionics codes. We expect the technology developed in REFLECT to be integrated by our industrial partners, in particular by ACE, a leading compilation tool supplier for embedded systems, and by Honeywell, a worldwide supplier of embedded high-performance systems.

REFLECT's design flow has unique characteristics that allow it to both adapt to and meet various non-functional requirements. Several key techniques are being integrated in REFLECT's design flow: aspects to express non-functional requirements and the user's knowledge about the algorithm; successive refinement and code transformations that allow the input application to be transformed in a stepwise fashion toward the target architecture and its requirements; best practices that capture information about previous algorithms/implementations; and an aspect/strategy-oriented domain-specific language (LARA) to encapsulate aspects and design patterns. We describe in this paper REFLECT's design flow and how aspects and strategies are used to map computations to FPGA-based systems. In particular, we show experimental results obtained by mapping kernels from two avionics applications when considering strategies suited to meeting high-performance requirements.

Bio:

João M.P. Cardoso is an Associate Professor with tenure at the Department of Informatics Engineering, Faculty of Engineering of the University of Porto. Before that, he was with IST/UTL (2006-2008), a senior researcher at INESC-ID (2001-2009), and with the University of Algarve (1993-2006). In 2001/2002, he worked for one year for PACT XPP Technologies, Inc., Munich, Germany. He received a 5-year Electronics and Telecommunications Engineering degree from the University of Aveiro in 1993, and M.Sc. and Ph.D. degrees in Electrical and Computer Engineering from IST/UTL (Technical University of Lisbon), Lisbon, Portugal in 1997 and 2001, respectively. He has coordinated and participated in various R&D projects.

He has participated in the organization of a number of conferences (e.g., RAW'10, FPL'03, FPL'07, FPL'08, ARC'05, ARC'06, ARC'07) and serves or has served as a Program Committee member for various international conferences (e.g., IEEE FPT, IEEE SASP, FPL, IC-SAMOS, ACM SAC-EMBS, ARC). He serves or has served as a reviewer for various international scientific journals (e.g., “IEEE Transactions on Computers”, “IEEE Transactions on VLSI”, “Elsevier Microprocessors and Microsystems”, “Elsevier Journal of Systems Architecture”, “IEEE Computer Magazine”, “IEEE Transactions on Education”, “Elsevier International Journal on Computers and Electrical Engineering”, “IEEE Transactions on Industrial Electronics”, “Elsevier Parallel Computing”, “IEEE Design & Test of Computers”).

He is co-author of the book “Compilation Techniques for Reconfigurable Architectures,” published by Springer, and co-editor of two Springer LNCS volumes. He has (co-)authored over 80 scientific publications (including journal/conference papers and patents) on subjects related to compilers, embedded systems, and reconfigurable computing. His research interests include reconfigurable computing, compilation techniques, domain-specific languages, high-performance embedded computing, and hardware accelerators.

A Run-Time Evolvable Hardware, Prof. Jim Tørresen

Prof. Jim Tørresen
A Run-Time Evolvable Hardware,
Prof. Jim Tørresen,
University of Oslo, Norway
currently a visiting professor at Cornell University

Abstract:

Traditional hardware design aims at creating circuits which, once fabricated, remain static during run-time. This changed with the introduction of reconfigurable technology and devices (typically FPGAs) which opened up the possibility of dynamic hardware. However, the potential of dynamic hardware for the construction of self-adaptive, self-optimizing and self-healing systems can only be realized if automatic design schemes are available.

One such method for automatic design is evolvable hardware. Evolvable hardware was introduced more than ten years ago as a new way of designing electronic circuits. Only the input/output relations of the desired function need to be specified; the design process is then left to an adaptive algorithm inspired by natural evolution. The design is based on incremental improvement of a population of initially randomly generated circuits. The best circuits have the highest probability of being selected for combination, generating new and possibly better circuits. New circuits are produced by crossover and mutation operations on the circuit descriptions.
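As a minimal sketch of this evolutionary loop (illustrative only; real evolvable hardware evolves FPGA configuration data against measured circuit behaviour, not a Python list), the following toy genetic algorithm evolves a bitstring "circuit" to match a target truth table using exactly the operators named above: selection of the best candidates, crossover, and mutation.

```python
import random

random.seed(1)  # deterministic toy run

# Desired input/output behaviour: one output bit per input combination.
TARGET = [0, 1, 1, 0, 1, 0, 0, 1]

def fitness(genome):
    # Number of truth-table rows the candidate circuit gets right.
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    # Single-point crossover of two circuit descriptions.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [bit ^ (random.random() < rate) for bit in genome]

def evolve(pop_size=20, generations=100):
    # Start from a population of randomly generated circuits.
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break  # a circuit matching the full truth table was found
        elite = pop[: pop_size // 2]          # best circuits survive ...
        offspring = [mutate(crossover(*random.sample(elite, 2)))
                     for _ in elite]          # ... and breed new candidates
        pop = elite + offspring
    return max(pop, key=fitness)

best = evolve()
print(f"best fitness: {fitness(best)}/{len(TARGET)}")
```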

This tutorial gives an introduction to evolutionary computation and how it can be applied to hardware evolution. Its main content is an overview of how evolvable hardware can provide run-time adaptivity for systems, e.g. for classification in real-world applications.

Bio:

Jim Torresen received his M.Sc. and Dr.ing. (Ph.D.) degrees in computer architecture and design from the Norwegian University of Science and Technology, University of Trondheim, in 1991 and 1996, respectively. He was employed as a senior hardware designer at NERA Telecommunications (1996-1998) and at Navia Aviation (1998-1999). Since 1999, he has been a professor at the Department of Informatics at the University of Oslo (associate professor 1999-2005). He has been a visiting researcher at Kyoto University, Japan, for one year (1993-1994) and at the Electrotechnical Laboratory, Tsukuba, Japan, for four months (1997 and 2000), and is now a visiting professor at Cornell University.

His current research interests include bio-inspired computing, machine learning, reconfigurable hardware, and robotics, and their application to complex real-world problems. He has proposed several novel methods in these areas and has published a number of scientific papers in international journals, books, and conference proceedings. He has given ten tutorials and several invited talks at international conferences, serves on the program committees of more than ten international conferences, is a regular reviewer for a number of international journals, and has acted as an evaluator for proposals in EU FP7.

A list and collection of publications can be found at the following web page: http://www.ifi.uio.no/~jimtoer/papers.html

More information on the web:
http://www.ifi.uio.no/~jimtoer
http://www.matnat.uio.no/forskning/prosjekter/crc

N/A

Dr Toomas P Plaks

Conference Chair
Dr Toomas P Plaks

London
Contact the Chair

Important Dates

Proposals for sessions:
  • Specialised Research Sessions
    Jan. 15, 2011
  • Research Project Sessions
    March 1, 2011
Regular Papers:
  • Submission is open until
    April 15, 2011
  • Notification
    May 4, 2011
  • Final Papers
    May 18, 2011

E-mail Directory

  • General Inquiries:
    inf@ersaconf.org
  • Paper Submission:
    sub@ersaconf.org
    Submissions only; no inquiries
  • CFPs are sent from:
    mail@ersaconf.org
    Do not reply

WEB Directory

  • ERSA HOMEPAGE:
    http://ersaconf.org
  • ERSA News:
    http://ersaconf.org/news
  • ERSA Conferences:
    http://ersaconf.org/ersa##
    where ## is 07, 08, 09, 10, 11
  • ERSA Mobile:
    http://ersaconf.org/ersa##_mobi
  • ERSA Archive:
    http://ersaconf.org/archive