## Theoretical Computer Science Forum

## Autumn 2004

This is a forum for talks on theoretical computer science organized by the TCS Laboratory at HUT (previously known as the Formal Methods Forum). We usually meet on Fridays at 2 p.m. in room B353 in the Computer Science building. For further information, please contact Ilkka Niemelä.
## Program

- 18.8.2004 Heikki Rantanen:
  **Analyzing the Random-Walk Algorithm for SAT** (Master's thesis presentation)

- 26.8.2004 Edith Elkind (Princeton University):
  **How Hard Is It to Manipulate Voting?**

  **Abstract:** We demonstrate how to make voting protocols resistant against manipulation by computationally bounded malicious voters. Our work extends the recent results of Conitzer and Sandholm in two important directions: we demonstrate how to make voting manipulation as hard as inverting one-way functions (rather than just worst-case hard), and we show that our hardness results hold against a large fraction of manipulating voters (rather than a single voter). Both improvements address important concerns in the field of secure voting systems. We also discuss the limitations of the current approach, showing that it cannot be used to achieve certain very desirable hardness criteria. [slides/.ppt]

  (Joint work with Helger Lipmaa.)

- 1.9.2004 Prof. Dr. Jürgen Dix (TU Clausthal, Germany):
  **Planning in Answer Set Programming using Ordered Task Decomposition**

  **Abstract:** We investigate a formalism for solving planning problems based on "ordered task decomposition" using Answer Set Programming (ASP). Our planning methodology is an adaptation of Hierarchical Task Network (HTN) planning, an approach that has recently led to some very efficient planners (e.g., SHOP). The ASP paradigm evolved out of the stable model semantics for logic programs in recent years and is strongly related to nonmonotonic logics. It has also led to various very efficient implementations (smodels, DLV). While all approaches to using ASP for planning considered so far rely on action-based planning, we consider for the first time a formulation of HTN planning as described in the SHOP planning system and define a systematic translation method from SHOP's representation of the planning problem into logic programs with negation. We show that our translation is sound and complete: answer sets of the logic program obtained by our translation correspond exactly to the solutions of the planning problem. Our approach does not rely on a particular system for computing answer sets and serves several purposes. (1) It constitutes a means to evaluate ASP systems using well-established benchmarks from the planning community. (2) It makes the more expressive HTN planning available in ASP. (3) When implemented on ASP solvers, its running time appears to grow roughly in proportion to that of a dedicated HTN planning system (SHOP). (4) It outperforms action-based planning methods formulated in ASP. (5) It sheds light on the performance of ASP systems using different grounding strategies.

  (Joint work with Dana Nau and Ugur Kuter.)

- 24.9.2004 Tomi Janhunen:
  **Capturing Parallel Circumscription with Disjunctive Logic Programs**

  **Abstract:** The stable model semantics of disjunctive logic programs is based on classical models which are minimal with respect to subset inclusion. As a consequence, every atom appearing in a disjunctive program is false by default. This is sometimes undesirable from the knowledge representation point of view, and a more refined control of minimization is called for. Such features are already present in Lifschitz's parallel circumscription, where certain atoms are allowed to vary or to have fixed values while all other atoms are minimized. We can formally establish that the expressive power of minimal models is properly increased in the presence of varying atoms. In spite of this, we show how parallel circumscription can be embedded into disjunctive logic programming in a relatively systematic fashion using a linear and faithful, but non-modular, translation. This enables the conscious use of varying atoms in disjunctive logic programs, leading to more elegant and concise problem representations in various domains.

  (Joint work with Emilia Oikarinen.)

- 24.9.2004 Matti Järvisalo:
  **A Compact Reformulation of Propositional Satisfiability as Binary Constraint Satisfaction**

  **Abstract:** A binary CSP encoding for the propositional satisfiability problem (SAT) is introduced. To our knowledge, our encoding is the first one that is linear in the number of variables, domain size, and constraint size with respect to the size of the SAT instance. Nevertheless, our encoding has the same attractive properties as the well-known hidden variable encoding when comparing the performance of (i) the Davis-Putnam procedure with the Forward Checking and Maintaining Arc Consistency algorithms on the encoding, and (ii) the local search methods WalkSAT and GSAT with the Min Conflicts algorithm. Moreover, considering 2-SAT, it is shown that a generalisation of Papadimitriou's Random Walk algorithm has a quadratic expected running time on our encoding.

  (Joint work with Ilkka Niemelä.)

- 27.9.2004 Dr. Jan Willemson (University of Tartu):
  **Game Theoretic Methods in Data Security**

  **Abstract:** Game theoretic methods can be used to study the mechanisms behind several global-scale data security problems (spam mail, denial of service attacks, viruses, etc.). The attacking and defending parties are considered as antagonistic populations, to which the laws of evolutionary game theory, expressed in the form of Volterra equations, can be applied. The study of these laws makes it possible to predict data security risks and to analyze their dynamics. The results can be applied to estimating practical data security risks, to better planning of protection methods, and to reasonable management of their costs. In this talk we try to view several global data security issues from the viewpoint of (evolutionary) game theory and set some related research goals.

- 1.10.2004 Misa Keinänen:
  **Obtaining Memory-Efficient Solutions to BESs**

  **Abstract:** The talk is concerned with memory-efficient solution techniques for Boolean fixed-point equations. We show how certain structures of fixed-point equation systems, often encountered in solving verification problems, can be exploited to substantially improve the performance of fixed-point computations. We also investigate the space complexity of the problem of solving Boolean equation systems, showing an NL-hardness result. A prototype of the proposed technique has been implemented, and experimental results on a series of protocol verification benchmarks are reported.

- 8.10.2004 Emil Falck:
  **Balanced Data Gathering in Energy-Constrained Sensor Networks**

  **Abstract:** The talk is concerned with the problem of gathering data from a wireless multi-hop network of energy-constrained sensor nodes to a common base station. Specifically, we aim to balance the total amount of data received from the sensor network during its lifetime against a requirement of sufficient coverage for all the sensor locations surveyed. Our main contribution lies in formulating this balanced data gathering task and in studying the effects of balancing. We give an LP network flow formulation and present experimental results on optimal data routing designs, also with impenetrable obstacles between the nodes. We then proceed to consider the effect of augmenting the basic sensor network with a small number of auxiliary relay nodes with less stringent energy constraints. We present an algorithm for finding approximately optimal placements for the relay nodes, given a system of basic sensor locations, and compare it with a straightforward grid arrangement of the relays.

- 15.10.2004 Tommi Syrjänen:
  **Cardinality Constraint Logic Programs**

  **Abstract:** We define the class of cardinality constraint logic programs and provide a formal stable model semantics for them. The class extends normal logic programs by allowing the use of cardinality constraints and conditional literals. We identify a decidable subset of the class, omega-restricted programs. We show how the formal semantics can be extended to allow the use of evaluated function symbols, such as arithmetic built-in operators. The omega-restricted cardinality constraint programs have been implemented in the Smodels system.

- 15.10.2004 Erkki Ruponen:
  **Integration Testing Exit Criteria in a Symbian OS Based Software Product** (Master's thesis presentation)

  **Abstract:** The characteristics of the Symbian operating system and the Symbian C++ dialect place special requirements on, for example, integration testing in the Symbian environment. Testing practices and test automation play central roles in integration testing. The objective of this master's thesis is to specify what aspects must be considered in the integration testing of Symbian modules and how integration testing should be performed and completed in the Symbian environment. The Symbian version of the Metrowerks CodeTest software analysis tool is evaluated; ATS, the STIF Test Framework, and test modules (DLLs) are found to be well suited for integration testing in the Symbian environment. The CodeTest® tool provides comprehensive features for code coverage, memory, and performance testing of software modules in the Symbian OS environment. However, it needs further development in order to be able to test memory management and performance in hardware with all kinds of Symbian modules.

- 5.11.2004 Docent Kimmo Varpaaniemi:
  **Stubborn Sets for Priority Nets**

  **Abstract:** Partial order methods, such as the stubborn set method and the priority method, reduce verification effort by exploiting irrelevant orders of events. We show how the stubborn set method can be applied to priority nets. By applicability we mean that for a given priority net, the stubborn set method constructs a reduced reachability graph that sufficiently represents the full reachability graph of the priority net. This work can also be considered as a combination of the stubborn set method and the priority method, to be used in ``complete or moderated incomplete'' verification when the original models are unprioritised nets.

- 12.11.2004 Timo Latvala:
  **Simple Bounded LTL Model Checking**

  **Abstract:** Bounded Model Checking (BMC) is a technique for finding violations of temporal logic specifications (usually LTL) in finite state systems. The method works by mapping the problem "Does the system have a counterexample of length k to the specification?" to the propositional satisfiability problem (SAT). The efficiency of BMC depends in part on how efficient the encoding of the problem into SAT is. We present an efficient encoding which is linear in the size of the system, the specification, and the bound k. The encoding has been implemented in the NuSMV model checker, and we compare it against previously presented encodings.

- 19.11.2004 Sven Laur:
  **Additive Conditional Disclosure of Secrets**

  **Abstract:** We propose the first two-round information-theoretically secure $\binom{m}{1}$-oblivious transfer protocol that works on groups of composite order and is secure in the standard model, such that the Receiver just sends a homomorphically encrypted index. Based on this protocol, we improve the conditional disclosure of secrets transformation that ``compiles'' many standard two-round protocols, private in the semi-honest model, to two-round protocols, private in the malicious model. Additionally, we consider only computationally secure $\binom{m}{1}$-oblivious transfer schemes that achieve higher communicational efficiency, and some intriguing connections with padding schemes.

  (Joint work with Helger Lipmaa.)

- 26.11.2004 Dr. Aristides Gionis (HIIT, Basic Research Unit):
  **Clustering Aggregation**

  **Abstract:** We consider the following problem: given a set of clusterings, find a clustering that agrees as much as possible with the given clusterings. This problem, clustering aggregation, appears naturally in various contexts. For example, clustering categorical data is an instance of the problem: each categorical variable can be viewed as a clustering of the input rows. Moreover, clustering aggregation can be used as a meta-clustering method to improve the robustness of clusterings. The problem formulation does not require a priori information about the number of clusters, and it gives a natural way of handling missing values. We give a formal statement of the clustering-aggregation problem, we discuss related work, and we suggest a number of algorithms. For several of the methods we provide theoretical guarantees on the quality of the solutions. We also show how sampling can be used to scale the algorithms to large data sets. We give an extensive empirical evaluation demonstrating the usefulness of the problem and of the solutions.

  (Joint work with Panayiotis Tsaparas and Heikki Mannila.)
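To make the clustering-aggregation objective from the last abstract concrete, here is a minimal Python sketch. It is not one of the speakers' algorithms, just an illustration under a simple greedy heuristic; the function names `disagreement` and `aggregate` are invented for this example. A clustering is a list of labels, and the objective is to minimize the total number of point pairs on which the output disagrees with the inputs.

```python
from itertools import combinations

def disagreement(c1, c2):
    """Number of point pairs placed together in one clustering
    but apart in the other (the distance being aggregated)."""
    return sum(
        (c1[i] == c1[j]) != (c2[i] == c2[j])
        for i, j in combinations(range(len(c1)), 2)
    )

def aggregate(clusterings):
    """Greedy sketch: assign each point to the cluster that adds the
    least pairwise disagreement with the input clusterings, or open a
    new singleton cluster if that is cheaper."""
    n = len(clusterings[0])
    m = len(clusterings)
    # sep[i][j] = number of input clusterings that separate points i and j
    sep = [[sum(c[i] != c[j] for c in clusterings) for j in range(n)]
           for i in range(n)]
    labels = [0] * n
    next_label = 1
    for i in range(1, n):
        members = {}
        for j in range(i):
            members.setdefault(labels[j], []).append(j)
        # keeping i apart from j costs m - sep[i][j] (clusterings joining them)
        apart_all = sum(m - sep[i][j] for j in range(i))
        best_label, best_cost = next_label, apart_all  # new singleton cluster
        for lab, pts in members.items():
            # joining cluster `lab` flips the cost direction for its members:
            # pay sep[i][j] instead of m - sep[i][j]
            cost = apart_all + sum(2 * sep[i][j] - m for j in pts)
            if cost < best_cost:
                best_label, best_cost = lab, cost
        labels[i] = best_label
        if best_label == next_label:
            next_label += 1
    return labels
```

For instance, `aggregate([[0, 0, 1, 1], [0, 0, 1, 1], [0, 1, 1, 1]])` returns the majority clustering `[0, 0, 1, 1]`. As the abstract notes, no a priori number of clusters is needed: clusters emerge from the pairwise agreement structure alone.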
Latest update: 04 February 2007. Ilkka Niemelä