<p><strong>T-CREST Website</strong></p>
<p>Florian Brandner and Alexander Jordan.</p>
<p><em>Refinement of Worst-Case Execution Time Bounds by Graph Pruning.</em></p>
<p>Computer Languages, Systems &amp; Structures (COMLAN), 2014.</p>
<p><br/> <strong>DOI:</strong> <a class="moz-txt-link-freetext" href="http://dx.doi.org/10.1016/j.cl.2014.09.001">http://dx.doi.org/10.1016/j.cl.2014.09.001</a></p>
<p></p>
<p><strong>Abstract:</strong></p>
<blockquote><p id="sp0060">As real-time systems increase in complexity to provide more and more functionality and perform more demanding computations, the problem of statically analyzing the Worst-Case Execution Time bound (WCET) of real-time programs is becoming more and more time-consuming and imprecise.</p>
<p id="sp0065">The problem stems from the fact that with increasing program size, the number of potentially relevant program and hardware states that need to be considered during WCET analysis increases as well. However, only a relatively small portion of the program actually contributes to the final WCET bound. Large parts of the program are thus irrelevant and are analyzed in vain. In the best case this only leads to increased analysis time. Very often, however, the analysis of irrelevant program parts interferes with the analysis of those program parts that turn out to be relevant.</p>
<p id="sp0070">We explore a novel technique based on <em>graph pruning</em> that promises to reduce the analysis overhead and, at the same time, increase the analysis’ precision. The basic idea is to eliminate those program parts from the analysis problem that are known to be irrelevant for the final WCET bound. This reduces the analysis overhead, since only a subset of the program and hardware states have to be tracked. Consequently, more aggressive analysis techniques may be applied, effectively reducing the overestimation of the WCET. As a side-effect, interference from irrelevant program parts is eliminated, e.g., on addresses of memory accesses, on loop bounds, or on the cache or processor state.</p>
<p id="sp0075">First experiments using a commercial WCET analysis tool show that our approach is feasible in practice and leads to reductions of up to 12% when a standard IPET approach is used for the analysis.</p>
</blockquote>
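<p>The pruning idea from the abstract above can be illustrated with a deliberately simplified sketch. This is not the paper's implementation (which prunes the input of a full IPET-based industrial analyzer): here the CFG is acyclic, block costs are invented, the WCET bound is just the longest entry-to-exit path, and every block that cannot lie on a path within a given slack of that bound is discarded before any further analysis.</p>

```python
# Hypothetical sketch of graph pruning for WCET analysis (not the paper's
# implementation). `order` is a topological order of the blocks, entry first,
# exit last; `succ` maps each block to its successors; `cost` is per-block.

def wcet_and_relevant(succ, cost, order, slack=0):
    entry, exit_ = order[0], order[-1]
    to_b = {entry: cost[entry]}          # longest path from entry to b
    for b in order[1:]:
        to_b[b] = cost[b] + max(to_b[p] for p in order if b in succ.get(p, ()))
    from_b = {exit_: cost[exit_]}        # longest path from b to exit
    for b in reversed(order[:-1]):
        from_b[b] = cost[b] + max(from_b[s] for s in succ[b])
    wcet = to_b[exit_]
    # Longest path *through* b; cost[b] is counted only once.
    through = {b: to_b[b] + from_b[b] - cost[b] for b in order}
    relevant = {b for b in order if through[b] >= wcet - slack}
    return wcet, relevant

# Diamond CFG: only the expensive branch survives pruning.
succ = {"e": ["a", "b"], "a": ["x"], "b": ["x"], "x": []}
cost = {"e": 1, "a": 10, "b": 2, "x": 1}
wcet, relevant = wcet_and_relevant(succ, cost, ["e", "a", "b", "x"])
print(wcet, sorted(relevant))   # 12 ['a', 'e', 'x']
```

<p>In the real setting the analysis of the pruned subgraph can then employ a more precise (and more expensive) hardware model, since far fewer program and hardware states remain to be tracked.</p>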
<p>Daniel Prokesch, Benedikt Huber and Peter Puschner.</p>
<p><em>Towards Automated Generation of Time-Predictable Code.</em></p>
<p>In <em>Int. Workshop on Worst-Case Execution Time Analysis</em>, volume 39 of OASIcs, pages 103–112. Schloss Dagstuhl, 2014.</p>
<p></p>
<p><strong>DOI:</strong> <a href="http://dx.doi.org/10.4230/OASIcs.WCET.2014.103">10.4230/OASIcs.WCET.2014.103</a></p>
<p></p>
<p><span class="publistentry">P. Puschner, D. Prokesch, B. Huber, J. Knoop, S. Hepp, G. Gebhard: <br/><i>The T-CREST Approach of Compiler and WCET-Analysis Integration</i></span></p>
<p><span class="publistentry">In: <i>Proceedings of the 9th Workshop on Software Technologies for Future Embedded and Ubiquitous Systems</i>, 2013.</span></p>
<p></p>
<p></p>
<p>Florian Brandner, Stefan Hepp, and Alexander Jordan.</p>
<p><em>Criticality: static profiling for real-time programs.</em></p>
<p>Real-Time Systems (RTS), May 2014, Volume 50, Issue 3, <span id="page-range">pp. 377–410</span>.</p>
<p></p>
<p><strong>DOI:</strong> <a href="http://dx.doi.org/10.1007/s11241-013-9196-y" target="_blank">http://dx.doi.org/10.1007/s11241-013-9196-y</a></p>
<p></p>
<p><strong>Abstract:</strong></p>
<blockquote><div class="abstract-content formatted"><p class="a-plus-plus">With the increasing performance demand in real-time systems it becomes more and more important to provide feedback to programmers and software development tools on the performance-relevant code parts of a real-time program. So far, this information was limited to an estimation of the worst-case execution time (WCET) and its associated worst-case execution path (WCEP) only. However, both the WCET and the WCEP provide only partial information. Only code parts that are on <em class="a-plus-plus">one</em> of the WCEPs are indicated to the programmer. <em class="a-plus-plus">No</em> information is provided for all other code parts. To give a comprehensive view covering the entire code base, tools in the spirit of program profiling are required.</p>
<p class="a-plus-plus">This work proposes an efficient approach to compute worst-case timing information for all code parts of a program using a complementary metric, called <em class="a-plus-plus">criticality</em>. Every statement of a program is assigned a criticality value, expressing how critical the code is with respect to the global WCET. This gives valuable information on how close the worst execution path passing through a specific program part is to the global WCEP. We formally define the criticality metric and investigate some of its properties with respect to dominance in control-flow graphs. Exploiting some of those properties, we propose an algorithm that reduces the overhead of computing the metric to cover complete programs. We also investigate ways to efficiently find only those code parts whose criticality is above a given threshold.</p>
<p class="a-plus-plus">Experiments using well-established real-time benchmark programs show an interesting distribution of the criticality values, revealing considerable amounts of highly critical as well as uncritical code. The metric thus provides ideal information to programmers and software development tools to optimize the worst-case execution time of these programs.</p>
</div>
</blockquote>
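<p>A toy illustration of the metric (our sketch, with invented block costs on an acyclic CFG): the criticality of a block is the length of the longest execution path passing through it, divided by the global WCET, so blocks on the WCEP score 1.0 and everything else scores strictly less.</p>

```python
# Illustrative only -- the paper computes criticality on real WCET analysis
# results; here the "WCET" is simply the longest path in a small cost-annotated
# acyclic CFG. `order` is a topological order, entry first, exit last.

def criticality(succ, cost, order):
    entry, exit_ = order[0], order[-1]
    to_b = {entry: cost[entry]}          # longest path from entry to b
    for b in order[1:]:
        to_b[b] = cost[b] + max(to_b[p] for p in order if b in succ.get(p, ()))
    from_b = {exit_: cost[exit_]}        # longest path from b to exit
    for b in reversed(order[:-1]):
        from_b[b] = cost[b] + max(from_b[s] for s in succ[b])
    wcet = to_b[exit_]
    # Longest path through b (cost[b] counted once), relative to the WCET.
    return {b: (to_b[b] + from_b[b] - cost[b]) / wcet for b in order}

# Diamond CFG: blocks on the WCEP get criticality 1.0, the cheap branch less.
succ = {"e": ["t", "f"], "t": ["x"], "f": ["x"], "x": []}
cost = {"e": 2, "t": 6, "f": 2, "x": 2}
crit = criticality(succ, cost, ["e", "t", "f", "x"])
print(crit)   # {'e': 1.0, 't': 1.0, 'f': 0.6, 'x': 1.0}
```

<p>A profiling tool would then highlight, say, all code with criticality above a threshold such as 0.9 as worth optimizing for the worst case.</p>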
<p></p>
<p>Florian Brandner and Alexander Jordan.</p>
<p><em>Subgraph-Based Refinement of Worst-Case Execution Time Bounds.</em></p>
<p>Technical Report.</p>
<p></p>
<p>Link: <a href="http://hal-ensta.archives-ouvertes.fr/hal-00978015" target="_blank">http://hal-ensta.archives-ouvertes.fr/hal-00978015</a></p>
<p></p>
<p><strong>Abstract:</strong></p>
<blockquote><p>As real-time systems increase in complexity to provide more and more functionality and perform more demanding computations, the problem of statically analyzing the Worst-Case Execution Time bound (WCET) of real-time programs is becoming more and more time-consuming and imprecise. The problem stems from the fact that with increasing program size, the number of potentially relevant program and hardware states that need to be considered during WCET analysis increases as well. However, only a relatively small portion of the program actually contributes to the final WCET bound. Large parts of the program are thus irrelevant and are analyzed in vain. In the best case this only leads to increased analysis time. Very often, however, the analysis of irrelevant program parts interferes with the analysis of those program parts that turn out to be relevant.</p>
<p>We explore a novel technique based on graph pruning that promises to reduce the analysis overhead and, at the same time, increase the analysis' precision. The basic idea is to eliminate those program parts from the analysis problem that are known to be irrelevant for the final WCET bound. This reduces the analysis overhead, since only a subset of the program and hardware states have to be tracked. Consequently, more aggressive analysis techniques can be applied to the smaller problem, effectively reducing the overestimation of the WCET. As a side-effect, interference from irrelevant program parts is eliminated, e.g., on addresses of memory accesses, on loop bounds, or on the cache or processor state.</p>
<p>First experiments using a commercial WCET analysis tool show that our approach is feasible in practice and leads to reductions of up to 6% when a standard IPET approach is used for the analysis.</p>
</blockquote>
<p>Stefan Hepp, and Florian Brandner.</p>
<p><em>Splitting Functions into Single-Entry Regions</em></p>
<p>International Conference on Compilers, Architecture, and Synthesis for Embedded Systems (CASES 2014). New Delhi, India. To appear.</p>
<p></p>
<p><strong>Abstract:</strong></p>
<p>As the performance requirements of today's real-time systems are on the rise, system engineers are increasingly forced to optimize and tune the execution time of real-time software. Apart from the usual optimizations targeting the average-case performance of a program, the worst-case execution time bound (WCET) delivered by program analysis tools often has to be improved to meet all deadlines and ensure safe operation of the entire system.</p>
<p>Modern computer architectures pose a significant challenge to this task due to their high complexity. Out-of-order execution, speculation, caches, buffers, and branch predictors highly depend on the execution history and are thus difficult to analyze precisely for WCET analysis tools. Time-predictable computer architectures overcome these problems with specifically designed hardware components that are amenable to static program analysis.</p>
<p>A recently proposed alternative for caching executable code, i.e., instructions, is the so-called method cache. Instead of a traditional block-based cache design, the method cache operates on larger code blocks under the control of the compiler. Due to its design, the analysis of the method cache is simplified. At the same time, such a system now highly depends on the compiler and its ability to form suitable code blocks for caching.</p>
<p>We propose a simple function splitting technique that derives a suitable partitioning of the basic blocks in a program, targeting the method cache of the time-predictable processor Patmos. Our approach exploits dominance properties to form code regions respecting the method cache's parameters as well as constraints of Patmos' instruction set architecture. Experimental results show that the method cache can be competitive with typical instruction cache configurations given the right splitting.</p>
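<p>A much-simplified, greedy sketch of the splitting idea (the paper's algorithm exploits dominance properties and Patmos-specific constraints; the rule below only approximates it): walk the basic blocks in layout order and start a new region whenever the size limit would be exceeded or the block is entered from outside the current region, so that every region keeps a single entry point.</p>

```python
# Hypothetical, greedy approximation of single-entry function splitting.
# `order` is the layout order of basic blocks, `size` their code sizes in
# bytes, `preds` the CFG predecessors, and `max_size` the method-cache limit.

def split_regions(order, size, preds, max_size):
    regions, current, cur_size = [], [], 0
    for b in order:
        # A non-header block reached from outside the current region would
        # give the region a second entry, so it must start a new region.
        entered_from_outside = any(p not in current for p in preds.get(b, ()))
        if current and (cur_size + size[b] > max_size or entered_from_outside):
            regions.append(current)
            current, cur_size = [], 0
        current.append(b)
        cur_size += size[b]
    if current:
        regions.append(current)
    return regions

# Diamond-shaped function with 4-byte blocks and an 8-byte region limit.
order = ["e", "t", "f", "j"]
preds = {"t": ["e"], "f": ["e"], "j": ["t", "f"]}
size = {b: 4 for b in order}
print(split_regions(order, size, preds, 8))   # [['e', 't'], ['f'], ['j']]
```

<p>Each resulting region can then be loaded into the method cache as one unit, which is what makes the cache's worst-case behavior analyzable.</p>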
<p>Benedikt Huber, Stefan Hepp, and Martin Schoeberl.</p>
<p><em>Scope-Based Method Cache Analysis.</em></p>
<p>14th International Workshop on Worst-Case Execution Time Analysis, 2014, Madrid, Spain.</p>
<p></p>
<p>DOI: <a href="http://dx.doi.org/10.4230/OASIcs.WCET.2014.73" target="_blank">10.4230/OASIcs.WCET.2014.73</a></p>
<p></p>
<p><strong>Abstract:</strong><br/> The quest for time-predictable systems has led to the exploration of new hardware architectures that simplify analysis and reasoning in the temporal domain, while still providing competitive performance. For the instruction memory, the method cache is a conceptually attractive solution, as it requests memory transfers at well-defined instructions only. In this article, we present a new cache analysis framework that generalizes and improves work on cache persistence analysis. The analysis demonstrates that a global view of the cache behavior permits the precise analysis of caches that are hard to analyze by inspecting the cache state locally.</p>
<p><strong>Alignment of Memory Transfers of a Time-Predictable Stack Cache</strong></p>
<p>Sahar Abbaspour and Florian Brandner.<br/> <strong>Junior Researcher Workshop on Real-Time Computing (JRWRTC)</strong>.<br/> To appear.</p>
<p></p>
<p><strong>Abstract:</strong></p>
<p></p>
<p>Modern computer architectures use features which often complicate the WCET analysis of real-time software. Alternative time-predictable designs, and in particular caches, are thus gaining more and more interest. A recently proposed stack cache, for instance, avoids the need for the analysis of complex cache states. Instead, only the occupancy level of the cache has to be determined.</p>
<p>The memory transfers generated by the standard stack cache are not generally aligned. These unaligned accesses risk introducing complexity into the otherwise simple WCET analysis. In this work, we investigate three different approaches to handling the alignment problem in the stack cache: (1) unaligned transfers, (2) alignment through compiler-generated padding, and (3) a novel hardware extension ensuring the alignment of all transfers. Simulation results show that our hardware extension offers a good compromise between average-case performance and analysis complexity.</p>
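<p>A back-of-the-envelope illustration (ours, with invented numbers) of why unaligned stack-cache transfers matter: a spill that straddles a burst boundary needs one memory burst more than an aligned transfer of the same size, which is precisely the extra cost that compiler-generated padding or the hardware extension described above would avoid.</p>

```python
# Counting the memory bursts a stack-cache transfer needs. A burst covers
# `burst` consecutive, burst-aligned words; `offset` is the word offset at
# which the transfer starts. Numbers are illustrative, not from the paper.

def bursts(offset, words, burst):
    if words == 0:
        return 0
    lead = offset % burst                 # words wasted in the first burst
    return -(-(lead + words) // burst)    # ceiling division

print(bursts(3, 4, 4))   # 2 bursts: the transfer straddles a burst boundary
print(bursts(0, 4, 4))   # 1 burst: the same transfer, aligned via padding
```

<p>For the WCET analysis this means an aligned design only needs the transfer size, while the unaligned one must also track the stack pointer's offset within a burst.</p>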
<p>Sahar Abbaspour, Alexander Jordan, and Florian Brandner.</p>
<p><em>Lazy Spilling for a Time-Predictable Stack Cache: Implementation and Analysis.</em></p>
<p><span style="font-family: arial,helvetica,sans-serif;">Presented at the 14th International Workshop on Worst-Case Execution Time Analysis, 2014.</span></p>
<p></p>
<p><a href="http://drops.dagstuhl.de/opus/volltexte/2014/4607/pdf/10.pdf">http://drops.dagstuhl.de/opus/volltexte/2014/4607/pdf/10.pdf</a></p>
<p>T-CREST presentation at the HiPEAC Workshop on Integration of Mixed-criticality Subsystems on Multi-core and Manycore Processors, Martin Schoeberl, Vienna, 21 January 2014</p>