The TXL source transformation system is widely used in industry and academia for research and production tasks involving source transformation and software analysis. Although it is designed to be accessible to software practitioners, learning to use TXL effectively takes time, and the initial learning curve is steep. This tutorial is designed to get you over that initial hump and rapidly take you from TXL novice to having the skills needed to use it effectively in real applications. Consisting of one-hour lecture presentations, each followed by a one-hour practice session, this is a hands-on tutorial in which you will quickly learn how to use TXL effectively in your research or industrial practice.
James Cordy is Professor and past Director of the School of Computing at Queen’s University at Kingston, Canada. As leader of the TXL source transformation project with hundreds of academic and industrial users worldwide, he is the author of more than 160 refereed contributions in programming languages, software engineering and artificial intelligence. From 1995-2001 he was Vice President and Chief Research Scientist at Legasys Corporation, whose LS/2000 source code analysis system was responsible for the analysis and reprogramming of over 4.5 billion lines of financial code of the largest Canadian banks for the Year 2000 problem. Dr. Cordy is an ACM Distinguished Scientist, a senior member of the IEEE, and an IBM CAS faculty fellow.
You can’t control what you can’t measure. And you can’t make sound decisions while wandering around in the dark. Risk management in practice requires shedding light on the internals of the software product in order to make informed decisions. Thus, in practice, risk management has to be based on information about artifacts (documentation, code, and executables) in order to detect potentially critical issues.
This tutorial presents experiences from industrial cases worldwide on qualitative and quantitative measurement of software products. We present our lessons learned, along with consolidated experiences from practice, and provide a classification scheme of applicable measurement techniques.
Participants of the tutorial will receive an introduction to the techniques in theory and will then apply them in practice in interactive exercises. This enables participants to learn how to shed light on the internals of their software and how to make risk management decisions efficiently and effectively.
Jens Knodel, Matthias Naab – Fraunhofer Institute for Experimental Software Engineering IESE, Kaiserslautern, Germany
Eric Bouwers, Joost Visser – Software Improvement Group (SIG), Amsterdam, The Netherlands
Every system is a legacy system: the moment a programmer writes a line of code, it becomes legacy. Even in relatively new systems, therefore, just as in long-lived ones, developers are faced with a body of code that they need to understand and from which they need to extract architectural knowledge. Unfortunately, anecdotal evidence shows that such knowledge tends to be tacit in nature, stored in the heads of people, and inconsistently scattered across various software artifacts and repositories. Furthermore, architectural knowledge vaporizes over time. Given the size, complexity, and longevity of many projects, developers often lack comprehensive knowledge of architectural design decisions and consequently make changes in the code that inadvertently degrade the underlying design and compromise its qualities.
This technical briefing answers three fundamental questions about software architecture recovery: Why? What? and How? Through several examples, it articulates and synthesizes the technical forces and financial motivations that lead software companies to invest in software architecture recovery. It discusses what pieces of design knowledge can be recovered and, finally, demonstrates a methodology, together with the required tools, for answering how to reconstruct an architecture from implementation artifacts.
Mehdi Mirakhorli – Rochester Institute of Technology, USA
Mehdi Mirakhorli is an assistant professor at Rochester Institute of Technology. His research focuses on the application of data mining and information retrieval techniques to software engineering problems: software architecture design, implementation, maintenance and reconstruction; requirements engineering; and software traceability. Previously, he worked for seven years as a software architect on large data-intensive software systems in the banking, health care and meteorological domains. He has organized a technical briefing on a similar topic, “Identifying and Protecting Architecturally Significant Code”, at the Software Engineering Institute (SEI) Architecture Technology User Network (SATURN) Conference, and “Discovering New Patterns by Mining Code Repositories” at the Pattern Languages of Programs Conference (PLoP 2014). Dr. Mirakhorli has served as guest editor for a special issue of IEEE Software and as organizer, committee member and reviewer for several software engineering workshops, conferences and journals. Furthermore, he has been a speaker at several technical venues, including as an ALTA Distinguished Speaker at Alcatel-Lucent and at a technical briefing held by the US government on security architecture. Dr. Mirakhorli has received two ACM SIGSOFT Distinguished Paper Awards at the International Conference on Software Engineering and has been actively engaged in security architecture reconstruction research projects with the US Department of Homeland Security (DHS).
A test oracle is a mechanism or procedure against which the correctness of the computed outputs of a program can be verified. When a test oracle does not exist, or is impractical or infeasible to use, the oracle problem is said to occur. The oracle problem has been reported to occur quite frequently. Metamorphic testing has been proposed as a method to alleviate the oracle problem, and since its inception it has received increasing attention. Research in metamorphic testing can be classified into three categories: application of metamorphic testing in domains with the oracle problem; integration of metamorphic testing with other analysis, testing and reliability methods that assume the availability of a test oracle; and the theory of metamorphic testing. As reported in the examination of the oracle problem by Harman et al., metamorphic testing has played a significant role in its alleviation (A Comprehensive Survey of Trends in Oracles for Software Testing, by M. Harman, P. McMinn, M. Shahbaz and S. Yoo, Technical Report CS-13-01, Department of Computer Science, University of Sheffield, 2013).

Attendees do not need any specific background: the tutorial has no prerequisite other than a basic knowledge of software engineering. It is designed for researchers and IT professionals working in software reliability, testing, debugging and analysis. It will be particularly interesting and useful to attendees who have encountered the oracle problem in their research or work, as well as to those who have developed techniques that assume the availability of a test oracle.
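To make the idea concrete, here is a minimal illustrative sketch (not from the tutorial materials) of a metamorphic test in Python. It checks a sine implementation using the mathematical identity sin(π − x) = sin(x): no oracle for the exact value of sin(x) is needed, because the test only compares two outputs of the implementation under test against each other.

```python
import math
import random

def satisfies_sine_relation(x, abs_tol=1e-12):
    """Metamorphic relation for sine: sin(pi - x) == sin(x).

    The source test case is x; the follow-up test case is pi - x.
    The relation is checked up to a floating-point tolerance,
    so no exact expected output (oracle) is required.
    """
    source_output = math.sin(x)              # output on the source input
    followup_output = math.sin(math.pi - x)  # output on the follow-up input
    return math.isclose(source_output, followup_output, abs_tol=abs_tol)

# Exercise the relation over many randomly chosen inputs.
violations = [x for x in (random.uniform(-10.0, 10.0) for _ in range(1000))
              if not satisfies_sine_relation(x)]
assert not violations, f"metamorphic relation violated for: {violations}"
```

A violation of the relation reveals a fault without ever knowing the correct value of sin(x) for any individual input; this is the essence of how metamorphic testing sidesteps the oracle problem.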
Tsong Yueh Chen – Swinburne University of Technology, Australia
The presenter, Tsong Yueh Chen, obtained his BSc and MPhil from The University of Hong Kong; MSc and DIC from Imperial College, University of London; and PhD from The University of Melbourne. He is currently a Professor of Software Engineering at Swinburne University of Technology, Australia. Prior to joining Swinburne, he taught at The University of Hong Kong and The University of Melbourne. He is currently on the editorial board of the journal Software Testing, Verification and Reliability. Professor Chen’s main research interests include software testing, fault localization, fault tolerance, reliability and software quality. He co-authored the first published article on metamorphic testing (with Professor S. C. Cheung and Dr. S. M. Yiu) and has continued to publish many articles on this topic. Professor Chen gave a tutorial on metamorphic testing at the International Conference on Software Quality (QSIC) in 2012, and on December 1, 2014, he presented a similar tutorial at the 21st Asia-Pacific Software Engineering Conference in Jeju, Korea.