# PASCO 2017

## Kaiserslautern, Germany, July 23-24, 2017.

The 8th International Workshop on Parallel Symbolic Computation (PASCO) is the latest instance in a series of workshops dedicated to the *promotion and advancement of parallel algorithms and software in all areas of symbolic mathematical computation.*

## When and Where

PASCO 2017 will be co-located with ISSAC 2017 at the Technical University of Kaiserslautern, running just before ISSAC (July 25-28, 2017). Local information can be found on the ISSAC pages, and registration is now open.

## Invited Speakers

- Paolo Bientinesi, RWTH Aachen, Germany

Title: *Compiling linear algebra expressions to high-performance code*

Abstract: Vectors, matrices and tensors are the mathematical objects universally used to describe scientific phenomena, engineering processes, and numerical algorithms. By contrast, processors only operate with scalars and small arrays, and do not understand the language and the rules of linear algebra. Because of this mismatch, any linear algebra expression has to be translated into the instructions supported by the specific target processor. Over the course of many years, the linear algebra community has put tremendous effort into the identification, standardization, and optimization of a rich set of relatively simple computational kernels--such as those included in the BLAS and LAPACK libraries--that provide the necessary building blocks for just about any linear algebra computation. The initial--daunting--task has thus been reduced to the decomposition of a target linear algebra expression in terms of said building blocks; we refer to this task as the "Linear Algebra Mapping Problem" (LAMP). However, LAMP is itself an especially challenging problem, requiring knowledge in high-performance computing, compilers, and numerical linear algebra. In this talk we present the problem, give an overview of the solutions provided by several programming languages and computing environments (such as Julia, Matlab, R, ...), and introduce Linnea, a compiler that solves the general form of LAMP. As shown through a set of test cases, Linnea's results are comparable with those obtained by a human expert.

- Florent Hivert, Laboratoire de Recherche en Informatique (LRI), Paris

Title: *High Performance Computing Experiments in Enumerative and Algebraic Combinatorics*

Abstract: In this talk, I will report on several experiments around large scale enumerations in enumerative and algebraic combinatorics.

In the first part, I'll present a small framework implemented in SageMath that allows performing map/reduce-like computations on large recursively defined sets. Though it doesn't really qualify as HPC, it allowed us to efficiently parallelize a dozen experiments, ranging from Coxeter groups and the representation theory of monoids to the combinatorial study of the C3 linearization algorithm used to compute the method resolution order (MRO) in scripting languages such as Python and Perl.
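The map/reduce pattern over a recursively defined set can be illustrated in a few lines. This is only a toy sketch, not Hivert's SageMath framework; all names here (`children`, `map_reduce`, the permutation-tree example) are invented for the illustration. Each node of the set is expanded recursively, leaves are mapped to values, and the top-level branches are handed to a pool of workers:

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce
from operator import add

def children(state):
    """Children of a node in a recursively defined set: here, a partial
    permutation of {0..n-1} extended by one so-far-unused value."""
    prefix, n = state
    used = set(prefix)
    return [(prefix + (v,), n) for v in range(n) if v not in used]

def map_reduce(state, mapper, reducer, neutral):
    """Sequentially combine the mapped leaves of the subtree rooted at state."""
    kids = children(state)
    if not kids:
        return mapper(state)  # a leaf: a full permutation
    return reduce(reducer, (map_reduce(k, mapper, reducer, neutral) for k in kids), neutral)

def parallel_map_reduce(root, mapper, reducer, neutral, workers=4):
    """Distribute the top-level branches of the recursion tree to a worker pool."""
    kids = children(root)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda k: map_reduce(k, mapper, reducer, neutral), kids)
    return reduce(reducer, parts, neutral)

# Count the leaves (full permutations) for n = 6: 6! = 720.
total = parallel_map_reduce(((), 6), mapper=lambda s: 1, reducer=add, neutral=0)
print(total)
```

In a real deployment one would use processes or a distributed work-stealing scheduler rather than threads, and the reducer would typically accumulate richer statistics than a count; the point is only that the user supplies `children`, a mapper, and an associative reducer, and parallelism falls out of the tree structure.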

In the second part, I'll describe a methodology used to achieve large speedups in several enumeration problems. Indeed, in many combinatorial structures (permutations, partitions, monomials, Young tableaux), the data can be encoded as a short sequence of small integers that can often be handled efficiently by a creative use of vector instructions. Through the challenging example of numerical monoids, I will then report on how Cilk Plus allows an extremely fast parallelization of the enumeration. Indeed, we have been able to enumerate sets with more than 10^15 elements on a single multicore machine.

This is joint work with Jean Fromentin.
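The "small sequence of small integers" encoding can be imitated in ordinary Python. This is a minimal sketch, not the actual implementation: plain Python integers stand in for SIMD registers, and it assumes every lane stays below 256 so that no carry crosses a lane boundary (the same precondition real SIMD lane arithmetic relies on). A single integer addition then acts on all entries at once:

```python
def pack(seq):
    """Pack a sequence of small integers (each < 256) into one integer,
    one byte per entry -- a stand-in for a SIMD register."""
    return int.from_bytes(bytes(seq), "big")

def unpack(word, length):
    """Recover the sequence of byte-sized lanes from a packed integer."""
    return list(word.to_bytes(length, "big"))

def add_vectors(a, b):
    """Elementwise addition of two packed vectors in a single integer add;
    valid as long as every lane sum stays below 256 (no cross-lane carry)."""
    return a + b

e1 = [3, 0, 2, 5]   # exponent vector of x^3 z^2 w^5
e2 = [1, 4, 0, 2]   # exponent vector of x y^4 w^2
prod = unpack(add_vectors(pack(e1), pack(e2)), 4)
print(prod)         # [4, 4, 2, 7] -- exponents of the monomial product

# Big-endian packing also preserves lexicographic order of equal-length sequences,
# so one integer comparison replaces an elementwise loop.
assert (pack(e1) > pack(e2)) == (e1 > e2)
```

With genuine vector instructions the same idea compares or combines 16-32 lanes per cycle, which is what makes such encodings attractive for enumeration at the 10^15 scale mentioned above.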

- Marc Moreno Maza, Western University, London, Ontario

Title: *Multithreaded programming on the GPU: pointers and hints for the computer algebraist*

Abstract: This 2-hour *tutorial* will attempt to cover the key principles that computer algebraists interested in GPU programming should have in mind. The first hour will introduce the basics of GPU architecture and the CUDA programming model: no prior experience with GPU programming will be assumed. In the second hour, we shall discuss recent developments in GPU architectures and programming models, as well as techniques for improving code performance.

## Important Dates

**Deadlines extended to:**

- Abstracts due Monday April 24th (23:59 PST).
- Papers due Sunday April 30th (23:59 PST).
- Decisions to authors Wednesday May 31st (23:59 PST).
- Final papers due Wednesday June 21st (23:59 PST).

## Paper Submission

- Papers must be written in English.
- Papers must contain original research and may not duplicate work published or submitted for publication elsewhere.
- All papers should use the ACM Master Article Template (either LaTeX or Word), with option `sigconf` of document class `acmart`, i.e. `\documentclass[sigconf]{acmart}`. **NB: This is a new template as of Spring 2017. Make sure to pick up the latest version.**
- Either extended abstracts (2 pages) or full papers (up to 10 pages) may be submitted.
- All papers should be submitted through EasyChair.
- For submission deadlines see the section above.

A list of accepted papers is here.

## Final Paper Submission and Proceedings

The final version of your PASCO 2017 paper is due by the final-paper deadline listed above. Please prepare it with the ACM proceedings template (http://www.acm.org/publications/proceedings-template). The page limit for full papers is 10 pages.

We have created a second conference, "PASCO_PROC_2017", on EasyChair for collecting final versions. If your paper is just one file, a .tex file, please upload the .tex file there. If your paper has more than one file, please create a .zip archive of your paper, including a self-contained set of LaTeX source files (and pictures or other included graphics) as well as a .pdf of the paper, and email it to both of us.

ACM will require you to fill out the ACM copyright form so that the paper can be put in the ACM Digital Library; only one author has to do this. PASCO 2017 proceedings will be published by ACM through its ICPS programme, with the allocated ISBN 978-1-4503-5288-8.

## Organising Committee

- General chair: Hans-Wolfgang Loidl, Heriot-Watt University, Edinburgh.
- Programme Committee chairs: Michael Monagan, Simon Fraser University, Canada and Jean-Charles Faugère, INRIA (Paris-Rocquencourt Research Center)
- Local chairs: Claus Fieker and Wolfram Decker, Technical University of Kaiserslautern.
- Treasurer: Tommy Hofmann, Technical University of Kaiserslautern.
- Publicity chair: Alexander Konovalov, University of St Andrews.

## Programme Committee

- Russell Bradford, University of Bath, England
- Jean-Guillaume Dumas, Université Grenoble, France
- Jean-Charles Faugère (co-chair), INRIA Paris-Rocquencourt, France
- Joachim von zur Gathen, Universität Bonn, Germany
- Pascal Giorgi, Université Montpellier, France
- Jeremy Johnson, Drexel University, USA
- Erich Kaltofen, North Carolina State University, USA
- Herbert Kuchen, University of Münster, Germany
- Marc Moreno Maza, Western University, Canada
- Michael Monagan (co-chair), Simon Fraser University, Canada
- Clement Pernet, Université Grenoble, France
- Daniel Roche, US Naval Academy, Annapolis, USA
- Wolfgang Schreiner, Johannes Kepler University, Linz, Austria
- Allan Steel, University of Sydney, Australia
- Emmanuel Thomé, INRIA Nancy, France

## Topics of Interest

Specific topics include, but are not limited to:

- Design and analysis of parallel algorithms for computer algebra
- Practical parallel implementation of symbolic or symbolic-numeric algorithms
- Design of high-performance algebraic packages and systems
- Data representation and distributed data-structures
- Considerations for modern hardware and hardware acceleration technologies (multi-cores, GPUs, FPGAs)
- Cache complexity and cache-oblivious algorithms for computer algebra
- Parallel implementations of computer algebra algorithms on GPUs
- Parallel algorithm implementation and performance tuning
- Compile-time and run-time techniques for automating optimization and platform adaptation of computer algebra algorithms
- Applications of high-performance computer algebra in theorem proving, cryptography, computational biology, number theory, group theory, satisfiability checking, SAT solving, etc.