
About Oskar Rynkiewicz

Oskar is an all-round engineer with data science and software development skills. He takes an interest in Big Data and has an aptitude for Machine Learning. Lately, he has been focused on harnessing Spark, Hadoop, and distributed systems. Over the last two years, he acquired fluency in the Python data science ecosystem. Having completed a six-month internship as a research engineer at a top university in Australia, he is able to address mathematically formulated problems. He obtained his diploma from the French engineering school IMT Atlantique, with a specialization in Information Processing Systems. There he studied Computer Science and Statistics, participating in numerous projects and collaborating with people of various nationalities and backgrounds. Three years spent outside his home country, Poland, gave him a broad perspective and fluency in French and English. He is a generalist with a diverse skill set, always keen on learning and pushing his technical abilities.

Spark Streaming part 3: tools and tests for Spark applications

Whenever services are unavailable, businesses suffer financial losses. Spark Streaming applications can break, like any other software application. A streaming application operates on data from the real world, so uncertainty is intrinsic to the application's input. Testing is essential to discover as many software defects and as much flawed logic as possible before [...]

June 19th, 2019 | Categories: Big Data, Data Engineering | 0 Comments

Spark Streaming part 2: run Spark Structured Streaming pipelines in Hadoop

Spark can process streaming data on a multi-node Hadoop cluster, relying on HDFS for storage and YARN for job scheduling. Thus, Spark Structured Streaming integrates well with Big Data infrastructures. A streaming data processing chain in a distributed environment will be presented. A cluster environment demands attention to aspects such as monitoring, stability, [...]

May 28th, 2019 | Categories: Big Data, Data Engineering | 0 Comments

Spark Streaming part 1: build data pipelines with Spark Structured Streaming

Spark Structured Streaming is a new engine, introduced with Apache Spark 2, for processing streaming data. It is built on top of the existing Spark SQL engine and the Spark DataFrame abstraction. The Structured Streaming engine shares the same API as the Spark SQL engine and is just as easy to use. Spark Structured Streaming [...]

April 18th, 2019 | Categories: Big Data, Data Engineering | 3 Comments

Publish Spark SQL DataFrame and RDD with Spark Thrift Server

The distributed and in-memory nature of the Spark engine makes it an excellent candidate for exposing data to clients that expect low latencies. Dashboards, notebooks, BI studios, and KPI reporting tools are such clients: they commonly speak the JDBC/ODBC protocols. Spark Thrift Server may be used in various fashions. It can run independently as a Spark standalone [...]

March 25th, 2019 | Categories: Big Data, Data Engineering | 1 Comment