On Day 1 of the conference, thanks to our awesome sponsor Dwango, all sessions (in both Room A and Room B) will be streamed live on NicoNico Live.
| Time | Room A | Room B |
|---|---|---|
| 9:00 - 9:55 | Registration Open | |
| 9:55 - 10:15 | S-1 Opening Remarks (Kota Mizushima, ScalaMatsuri Committee) | - |
| 10:15 - 11:15 | S-2 Keynote Address - The Evolution of Scala (Martin Odersky, EPFL) | - |
| 11:25 - 12:05 | A-1 (Eugene Yokota, Typesafe) | B-1 GitBucket: Perfect Github clone by Scala (Naoki Takezoe, BizReach, Inc.) |
| 12:10 - 12:50 | A-2 Fifty Rapture One-Liners in Forty Minutes (Jon Pretty) | B-2 Xitrum Web Framework Live Coding Demos (Takeharu Oshida & Ngoc Dao, Mobilus Corporation) |
| 12:50 - 14:20 | S-3 Lightning-Talk Session with Lunch | - |
| 14:20 - 15:00 | A-3 (Yevgeniy Brikman, LinkedIn) | B-3 Introduction to SparkSQL and Catalyst (Takuya Ueshin, Nautilus Technologies, Inc.) |
| 15:05 - 15:45 | A-4 Scalable Generator: Using Scala in SIer Business (Yugo Maede, TIS Inc.) | B-4 Solid and Sustainable Development in Scala (Kazuhiro Sera, M3, Inc.) |
| 15:55 - 16:35 | A-5 Building a Unified "Big Data" Pipeline in Apache Spark (Aaron Davidson, Databricks) | B-5 (Takaya Tsujikawa, Hatena Co., Ltd.) |
| 16:35 - 17:15 | S-4 Business Meeting presented by Typesafe, and Coffee Break | - |
| 17:15 - 17:55 | A-6 Getting started with Scalding, Storm and Summingbird (Yoshimasa Niwa, Twitter, Inc.) | B-6 The Trial and Error in Releasing GREE Chat: GREE's First Scala Product (Takayuki Hasegawa & Shun Ozaki, GREE, Inc.) |
| 18:05 - 18:45 | A-7 Scarab: SAT-based Constraint Programming System in Scala (Takehide Soh, ISTC, Kobe University) | B-7 (Taro L. Saito, Treasure Data, Inc.) |
| 18:50 - 19:30 | A-8 Japan's national sport and Scala (Takuya Fujimura, DWANGO MOBILE Co., Ltd.) | B-8 What's a macro?: Learning by Examples (Takako Shimamoto, BizReach, Inc.) |
| 19:35 - 20:15 | A-9 (When I moved) From Ruby to Scala: There's more than two ways to do it (todesking, Maverick, Inc.) | - |
| 20:20 - 22:20 | S-5 After Party featuring Lightning Talks | - |
Scala is one of the relatively few languages that escaped from an academic research lab into widespread industrial usage. This was not something that was planned 10 years ago, when Scala was first announced. In this talk I'll give an overview of the development of Scala from its beginning to the near future, addressing some questions one might ask when looking back: How did motivations and expectations change? In hindsight, what were the important achievements? What was learned along the way?
Martin Odersky is professor of computer science at EPFL in Lausanne, Switzerland, co-founder of Typesafe, and creator of the Scala language. For most of his career Martin has worked on the fusion of functional and object-oriented programming. He believes the two paradigms are two sides of the same coin, to be unified as much as possible. To support this claim, he has worked on a number of language designs, from Pizza to GJ to Scala. He wrote javac, the compiler used by the majority of today's Java programmers, and scalac, the compiler used by the fast-growing Scala community. He authored "Programming in Scala", the best-selling book on Scala, and pioneered two massive open online courses on functional and reactive programming. Previously, he held positions at IBM Research, Yale University, the University of Karlsruhe and the University of South Australia.
sbt is an interactive build tool used widely in the Scala community. I will first demonstrate the basic usage of sbt, then go over the key concepts that make up sbt.
Next, I will discuss upcoming features and future visions for sbt 1.0.
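To give a flavor of the basic usage the session starts from, here is a minimal `build.sbt` (the project name and version numbers are illustrative, not taken from the talk):

```scala
// build.sbt -- a minimal sbt build definition (illustrative example)
name := "hello-scalamatsuri"

version := "0.1.0"

scalaVersion := "2.11.2"

// Settings are key := value pairs; sbt combines them into the build state.
libraryDependencies += "org.scalatest" %% "scalatest" % "2.2.1" % "test"
```

Running `sbt` in the project directory starts the interactive shell, where commands such as `compile`, `test`, and `~test` (which re-runs tests on every source change) are entered.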
Eugene Yokota (@eed3si9n) is a software developer active on and off work. After moving to the U.S. to study at Stevens Institute of Technology, Eugene started his career in New Jersey writing enterprise financial applications in Delphi and C#. On nights and weekends, however, he hacked on various sbt plugins and other hobby projects such as scalaxb, treehugger, and the "learning Scalaz" series. He has also provided a number of Japanese translations of blog articles and Scala Documentation guides. In 2014, Eugene joined Typesafe as a core developer of sbt.
GitBucket is a GitHub clone written in Scala. Its most important feature is easy installation: it requires only a JVM. You can start using GitBucket with a single command, and it also provides SSH access. In this session, I will explain GitBucket's core features, the technologies used in GitBucket, and the future roadmap.
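For reference, the single-command start mentioned above is just launching the released war file on a stock JVM (this assumes the `gitbucket.war` release file has already been downloaded):

```shell
# Launch GitBucket; by default it serves HTTP on port 8080.
java -jar gitbucket.war
```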
I'm a Scala programmer at BizReach, Inc., working on a new service built with Scala. I'm the author of GitBucket, a GitHub clone written in Scala, and a committer on Scalatra, a simple and powerful web framework in Scala.
The Xitrum project was started by Ngoc Dao in 2010. Xitrum 2.x was introduced in a lightning talk at Scala Conference in Japan 2013 (http://www.slideshare.net/ngocdaothanh/xitrum-scalaconfjp2013). This year, we would like to introduce Xitrum 3.x and its new features, in live-coding style:
Core features: automatic route collection, WebSocket, SockJS, CORS, i18n, etc.
Oshida and Ngoc work at Mobilus, a venture that provides mobile solutions. At Mobilus, Xitrum is used in projects such as real-time chat solutions for many game, telecom, and education companies in Japan. Outside Mobilus, Xitrum is also used at companies in Korea and Russia.
Here's the showdown you've been waiting for: Node.js vs Play Framework. Both are popular open source projects that are built for developer productivity, asynchronous I/O, and the real-time web. But which one is easier to learn, test, deploy, debug, and scale? Should you pick JavaScript or Scala? The Google V8 engine or the JVM? NPM or Ivy? Grunt or SBT? Two frameworks enter, one framework leaves.
LinkedIn, the world's largest professional network, with over 300 million members in 200 countries, is now running on top of the Play Framework. Yevgeniy (Jim) Brikman led the Play project at LinkedIn, wrote the core infrastructure code, and trained developers to use Play, Scala, and functional programming. Along the way, he learned a lot of lessons about productivity, reliability, and performance, which he loves to share through blog posts and talks.
Apache Spark, written in Scala, attracts a lot of attention as a distributed processing engine faster than Hadoop MapReduce. I will introduce SparkSQL, one of Apache Spark's components, and Catalyst, which SparkSQL uses.
SparkSQL is a project to execute SQL on Apache Spark, developed mainly by Databricks, Inc. It parses SQL, builds a logical execution plan and then a physical execution plan, and finally converts it into RDDs (Resilient Distributed Datasets). The logical execution plan is optimized with a rule-based approach. This planning and optimization framework is provided by Catalyst.
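To make the rule-based approach concrete, here is a toy sketch in plain Scala (my own illustration, not Catalyst's actual API): a plan is a tree, and an optimization rule rewrites subtrees, in this case folding additions of constants.

```scala
// Toy illustration of rule-based plan optimization in the Catalyst style
// (a sketch, not Catalyst's real API): plans are trees, and a rule
// rewrites matching subtrees bottom-up.
sealed trait Expr
case class Lit(value: Int) extends Expr
case class Add(left: Expr, right: Expr) extends Expr

// One rule: fold an addition of two literals into a single literal.
def constantFold(e: Expr): Expr = e match {
  case Add(l, r) =>
    (constantFold(l), constantFold(r)) match {
      case (Lit(a), Lit(b)) => Lit(a + b)  // rule fires
      case (fl, fr)         => Add(fl, fr) // rule does not apply
    }
  case lit: Lit => lit
}

val plan      = Add(Add(Lit(1), Lit(2)), Lit(3)) // (1 + 2) + 3
val optimized = constantFold(plan)               // Lit(6)
```

Catalyst applies batches of such rules to a plan until a fixed point is reached; this sketch shows only the single-rule, bottom-up case.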
A programmer working at Nautilus Technologies, Inc. A Spark contributor.
We usually use Java when we build web applications.
We have been improving our development efficiency in various ways, but we want to improve further.
I thought we needed to change our current style (i.e., Java) and decided to adopt Scala for further evolution.
To solve problems we encountered when using Scala, I have been developing a code generator for Play Framework and Slick.
It has the following features.
I work in the Strategy Technology Center of TIS Inc. (TIS is a system integrator in Japan). Our business covers a broad spectrum of the IT service field, such as developing and operating mission-critical enterprise systems for banks, insurance, credit card, manufacturing, and so on. I have built in-house application frameworks and development tools, and I have also taught these technologies to our engineers. Now, in order to spread Scala among them, I am evaluating the suitability of Scala for our company.
Nowadays, Scala is getting attention as a stable platform for asynchronous, event-driven architectures and as a pragmatic functional programming language. I think quite a few people also look to Scala for a better, more refined, functionally flavored style of object-oriented programming. Some people may call it "better Java" or something like that.
I believe that using Scala as a better OOP language can make our development more solid and safer. It's time for us to think about how to sustainably maintain and grow existing (and sometimes legacy) Scala applications without trouble.
In this talk, I'll present good practices for a friendly and solid coding style, aimed at programmers who are used to object-oriented programming.
A Scala enthusiast in Japan. ScalikeJDBC, Skinny Framework project lead. A web developer at M3, Inc.
As big data becomes a concern for more and more organizations, there is a need for both faster tools to process it and easier-to-use APIs. Apache Spark is a cluster computing engine written in Scala that addresses these needs through (1) in-memory computing primitives that let it run 100x faster than Hadoop and (2) concise, high-level, functional APIs in Scala, Java, and Python.
In this talk, we’ll demo the ability of Spark to unify a range of data processing techniques live by building a machine learning pipeline with 3 stages: ingesting JSON data into a SQL table; training a k-means clustering model; and applying the model to a live stream of tweets. Typically this pipeline might require a separate processing framework for each stage, but we can leverage the versatility of the Spark runtime to combine Shark, MLlib, and Spark Streaming and do all of the data processing in a single, short program. This allows us to reuse code and memory between the components, improving both development time and runtime efficiency.
This talk will be a fully live demo and code walkthrough where we’ll build up the application throughout the session, explain the libraries used at each step, and finally classify raw tweets in real-time.
Aaron Davidson is an Apache Spark committer and software engineer at Databricks. He has implemented Spark standalone cluster fault tolerance and shuffle file consolidation, and has helped in the design, implementation, and testing of Spark's external sorting and driver fault tolerance. He is also a contributor to the Tachyon in-memory distributed file system and has co-authored work on Highly Available Transactions in the Berkeley AMP Lab.
Mackerel, an application performance management service provided by Hatena, has adopted Scala and Play2 on the server side. I will explain why we chose Scala although we had used Perl for over 10 years, how it affected our development flow and the product, and the current development and operation of Mackerel.
Software engineer at Hatena, lead developer of the Mackerel team.
We at Twitter use the Scala language for many purposes. In particular, we use it for real-time jobs and asynchronous batch jobs that analyze data to build new services on top of our infrastructure.
In this presentation, I will introduce the tools and frameworks underlying this data processing, namely Scalding, Storm, and Summingbird, from a user's point of view.
I'm a software engineer working at Twitter, based in San Francisco. I currently contribute mainly to the iOS application, and I'm also interested in various Scala projects.
We released a chat service named GREE Chat in June 2014.
This presentation is about our approach to solving some problems we have encountered when developing the backend system of our chat service.
Topics:
Joined GREE, Inc. in 2013 as a new graduate. Currently part of the GREE Chat project as an engineer.
Since 2000, remarkable improvements have been made in the efficiency of solvers for propositional satisfiability testing (SAT). Such improvements of SAT solvers have enabled a programmer to develop SAT-based systems for planning, scheduling, and hardware/software verification. However, for a given problem, we usually need to develop a dedicated program that encodes it into SAT.
In this talk, we present Scarab, a SAT-based Constraint Programming System in Scala. The major design principle of Scarab is to provide an expressive, efficient, customizable, and portable workbench for SAT-based system developers. It provides a rich constraint modeling language on Scala and enables a programmer to rapidly specify problems and to experiment with different modelings. Scarab also provides a simple way to realize incremental solving, solution enumeration, native constraints, and dynamic addition and/or removal of constraints.
Scarab is implemented in Scala and consists of Constraint Programming Domain-Specific Language (DSL), SAT encoding module, and interface to the back-end SAT solvers. The current version of Scarab adopts Sat4j as a back-end SAT solver. The combination of Scarab and Sat4j makes it possible to develop portable SAT-based systems that run on any platform supporting Java.
The source code and more information about Scarab are available at
http://kix.istc.kobe-u.ac.jp/~soh/scarab/.
Takehide Soh received his Master of Engineering from Kobe University in 2006. After two years at Suntory Co., Ltd., he studied at the Graduate University for Advanced Studies (SOKENDAI) and received his Ph.D. in Informatics in 2011. He currently works at the Information Science and Technology Center of Kobe University as an assistant professor. His research interests are SAT technology, constraint programming, and their applications.
Silk is a framework for building dataflows in Scala. In Silk users write data processing code with collection operators (e.g., map, filter, reduce, join, etc.). Silk uses Scala Macros to construct a DAG of dataflows, nodes of which are annotated with variable names in the program. By using these variable names as markers in the DAG, Silk can support interruption and resume of dataflows and querying the intermediate data. By separating dataflow descriptions from its computation, Silk enables us to switch executors, called weavers, for in-memory or cluster computing without modifying the code. In this talk, we will show how Silk helps you run data-processing pipelines as you write the code.
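The collection-operator style that Silk builds on can be sketched with plain Scala collections (this is ordinary Scala code, not Silk's actual API); Silk lifts the same operators into a DAG whose execution is delegated to a weaver:

```scala
// A word-count dataflow written in plain collection-operator style.
val lines = Seq("scala silk", "silk weaves dataflows", "scala")

val counts = lines
  .flatMap(_.split(" "))                            // split lines into words
  .map(word => (word, 1))                           // pair each word with 1
  .groupBy(_._1)                                    // group pairs by word
  .map { case (word, pairs) => (word, pairs.size) } // count occurrences
// counts maps each word to its number of occurrences
```

In Silk, the intermediate values bound to names like `counts` become labeled nodes in the dataflow DAG, which is what allows interruption, resumption, and querying of intermediate data.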
Taro L. Saito is a software engineer at Treasure Data, Inc. He received a Ph.D. in computer science from the University of Tokyo. Before joining Treasure Data, he worked on genome sciences, database management systems, and distributed computing as an assistant professor at the University of Tokyo.
Scala is already a practical language.
We need to stop writing introductions to Scala and asking whether it can be used in practice. Instead, we should be sharing our knowledge about best practices when designing, writing and deploying Scala apps.
In this session, I'll be talking about my team's experience using Scala and Play to develop the backend for a sports-related smartphone app, including what we considered at the design phase, what we achieved and problems we encountered. I want to pass on this know-how to anybody who is planning to, or just starting to, use Scala and Play in production.
Scala newbie (I've been using Scala for about a year now). Manager & programmer at DWANGO MOBILE Co., Ltd. I do a bit of everything, from coding to architecture design and service design. If you invite me to go for a drink, I'll never say no!
Scala has included macros since version 2.10.0.
In this session, I'd like to talk about topics such as the following:
A Scala programmer at BizReach, Inc., and a GitBucket committer.
Ruby and Scala share the same attributes: both are class-based object-oriented languages integrated with the functional paradigm.
But Ruby prefers to do everything dynamically to gain maximum flexibility, whereas Scala does many things at compile time to gain speed and safety.
In this session, we dive into the two languages and learn about each other's exotic culture.
Developing an advertising system (DSP) at Maverick, Inc. with Scala and Vim.
On Day 2 we will host Japan's first ever Scala unconference. An unconference is a conference in which you, the attendees, make the rules! You decide what you want to discuss/learn about/hack on. We've never done this before so we're not sure exactly how it's going to turn out, but it should be a lot of fun! For more details, check out the unconference page.
The unconference will run from 10am to 5pm. (Venue opens at 9am.) We will provide breakfast and lunch.