R2DBC Arabba-RELEASE - a new look at reactive programming for SQL







Congratulations on the release of R2DBC version Arabba-RELEASE! This is the very first stable release of the project.







R2DBC (Reactive Relational Database Connectivity) is an open source project dedicated to reactive programming for SQL. Its developers spent two whole years preparing the first version of the specification! In this article we will talk about why R2DBC is needed and what is happening in the project right now.







In the Java developer community, the attitude towards reactive programming has traditionally been ambivalent. Reactive programming can significantly improve application scalability thanks to pipelined data processing, but it also raises the entry barrier and produces code that looks completely different from "traditional" imperative code.







The key ingredient of reactive streams is non-blocking execution. At the same time, a blocking component inside a reactive system can easily break everything, because a reactive runtime uses a very limited number of threads.
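The effect is easy to reproduce with the JDK alone. The sketch below (the pool size and sleep durations are arbitrary illustrations, not R2DBC code) runs four blocking tasks on a two-thread pool, the kind of small scheduler a reactive runtime uses, and the total latency roughly doubles:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BlockingStarvationDemo {

    // Runs `tasks` blocking jobs of `sleepMillis` each on a pool of `threads`
    // workers and returns the total wall-clock time in milliseconds.
    static long run(int threads, int tasks, long sleepMillis) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        CountDownLatch done = new CountDownLatch(tasks);
        long start = System.nanoTime();
        for (int i = 0; i < tasks; i++) {
            pool.submit(() -> {
                try {
                    Thread.sleep(sleepMillis); // stands in for a blocking call, e.g. a JDBC query
                } catch (InterruptedException ignored) {
                }
                done.countDown();
            });
        }
        done.await();
        pool.shutdown();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        // Reactive runtimes typically run on about as many threads as CPU cores.
        // Four 200 ms blocking tasks on 2 threads must run in two waves,
        // so the total time is roughly double the time of one task.
        long elapsed = run(2, 4, 200);
        System.out.println("4 blocking tasks on 2 threads took ~" + elapsed + " ms");
    }
}
```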







Applications that talk to SQL databases typically use JDBC, the standard for the JVM ecosystem. In turn, JDBC makes it possible to use frameworks built on top of it that provide higher-level abstractions, so you are not distracted by the technical details of communicating with the database. But JDBC is a blocking API. To use JDBC in a reactive application, you have to offload its blocking calls to a thread pool. Alternatively, you can skip JDBC and work directly with database-specific drivers. Either way, we face a dilemma:

- offload blocking JDBC calls to a dedicated thread pool, which caps scalability and largely defeats the purpose of going reactive;
- bypass JDBC in favor of a proprietary, database-specific driver, losing the standard API and all the frameworks built on top of it.

Both options put developers in an uncomfortable position, because neither solves the problem completely. That is why R2DBC was created: a standardized reactive API for SQL. It consists of a specification and an API. Together they describe how to build an R2DBC-compliant driver and what framework developers can expect from R2DBC in terms of functionality and behavior. R2DBC thus provides the foundation for pluggable drivers.
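The first horn of this dilemma, offloading blocking calls to a thread pool, can be sketched with nothing but the JDK; `blockingQuery` and the pool size of 16 below are purely illustrative stand-ins for a real JDBC call:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class JdbcOffloadingSketch {

    // Dedicated pool for blocking work, so the reactive scheduler threads stay free.
    static final ExecutorService JDBC_POOL = Executors.newFixedThreadPool(16);

    // Stand-in for a blocking JDBC query; a real one would block on socket I/O.
    static String blockingQuery(int id) {
        try {
            Thread.sleep(50);
        } catch (InterruptedException ignored) {
        }
        return "person-" + id;
    }

    // The blocking call is shifted onto JDBC_POOL; the caller gets a future
    // immediately and is never blocked itself.
    static CompletableFuture<String> findPerson(int id) {
        return CompletableFuture.supplyAsync(() -> blockingQuery(id), JDBC_POOL);
    }

    public static void main(String[] args) {
        System.out.println(findPerson(42).join()); // prints "person-42"
        JDBC_POOL.shutdown();
    }
}
```

Note that the dedicated pool now caps concurrency: no matter how many requests arrive, at most 16 queries run at a time, which is exactly the scalability ceiling R2DBC sets out to remove.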







Dependencies



R2DBC targets Java 8 and requires an external dependency on Reactive Streams, because Java 8 has no native reactive programming API. Starting with Java 9, Reactive Streams became part of the JDK as the Flow API, so once the project moves to Java 9 or later, future versions of R2DBC will be able to migrate to the Flow API, and R2DBC will become a specification without dependencies on external libraries.
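For illustration, the same four Reactive Streams interfaces (Publisher, Subscriber, Subscription, Processor) have been available in the JDK since Java 9 under java.util.concurrent.Flow. A minimal stdlib-only sketch using the built-in SubmissionPublisher:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.SubmissionPublisher;

public class FlowApiDemo {

    // Publishes the given items through the JDK's built-in Flow.Publisher
    // implementation and collects everything a simple subscriber receives.
    static List<String> publishAndCollect(List<String> items) {
        List<String> received = new CopyOnWriteArrayList<>();
        SubmissionPublisher<String> publisher = new SubmissionPublisher<>();
        // consume() subscribes with unbounded demand and returns a future
        // that completes once the publisher is closed and drained.
        CompletableFuture<Void> done = publisher.consume(received::add);
        items.forEach(publisher::submit);
        publisher.close(); // completes the stream
        done.join();       // wait until the consumer has seen every item
        return received;
    }

    public static void main(String[] args) {
        System.out.println(publishAndCollect(List.of("row-1", "row-2", "row-3")));
    }
}
```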







R2DBC structure



R2DBC defines the behavior and the set of core interfaces used to integrate an R2DBC driver with the code that calls it. Here they are:

- ConnectionFactory
- Connection
- Batch
- Statement
- Result
- Row

In addition to these interfaces, R2DBC comes with a set of exception categories and metadata interfaces that provide detailed information about the driver and the database.







The main driver entry point is ConnectionFactory. It creates a Connection, which allows you to communicate with the database.







R2DBC uses Java's standard ServiceLoader to find drivers on the classpath. In the application, a ConnectionFactory can be obtained from a URL:







ConnectionFactory connectionFactory = ConnectionFactories
        .get("r2dbc:h2:mem:///my-db?DB_CLOSE_DELAY=-1");

public interface ConnectionFactory {

    Publisher<? extends Connection> create();

    ConnectionFactoryMetadata getMetadata();
}
      
      





Requesting a Connection starts a non-blocking process that connects to the underlying database. Once established, the connection is used to track transactional state, or simply to run a Statement:







Flux<Result> results = Mono.from(connectionFactory.create())
        .flatMapMany(connection -> {
            return connection
                    .createStatement("CREATE TABLE person (id SERIAL PRIMARY KEY, first_name VARCHAR(255), last_name VARCHAR(255))")
                    .execute();
        });
      
      





Let's look at the Connection and Statement interfaces:







public interface Connection extends Closeable {

    Publisher<Void> beginTransaction();
    Publisher<Void> close();
    Publisher<Void> commitTransaction();
    Batch createBatch();
    Publisher<Void> createSavepoint(String name);
    Statement createStatement(String sql);
    boolean isAutoCommit();
    ConnectionMetadata getMetadata();
    IsolationLevel getTransactionIsolationLevel();
    Publisher<Void> releaseSavepoint(String name);
    Publisher<Void> rollbackTransaction();
    Publisher<Void> rollbackTransactionToSavepoint(String name);
    Publisher<Void> setAutoCommit(boolean state);
    Publisher<Void> setTransactionIsolationLevel(IsolationLevel level);
    Publisher<Boolean> validate(ValidationDepth depth);
}

public interface Statement {

    Statement add();
    Statement bind(int index, Object value);
    Statement bind(String name, Object value);
    Statement bindNull(int index, Class<?> type);
    Statement bindNull(String name, Class<?> type);
    Publisher<? extends Result> execute();
}
      
      





Running this Statement produces a Result. It contains either the number of rows changed in the table or the rows themselves:







Flux<Result> results = …;
Flux<Integer> updateCounts = results.flatMap(Result::getRowsUpdated);
      
      





Rows can be processed in a streaming fashion. In other words, rows become available as soon as the driver has received and decoded them at the protocol level. To process rows, you write a mapping function that is applied to each decoded Row. This function can extract an arbitrary number of values and return either scalars or a materialized object:







Flux<Result> results = …;
Flux<Integer> ids = results.flatMap(result ->
        result.map((row, rowMetadata) -> row.get(0, Integer.class)));
      
      





Let's look at the Result and Row interfaces:







public interface Result {

    Publisher<Integer> getRowsUpdated();

    <T> Publisher<T> map(BiFunction<Row, RowMetadata, ? extends T> mappingFunction);
}

public interface Row {

    Object get(int index);
    <T> T get(int index, Class<T> type);
    Object get(String name);
    <T> T get(String name, Class<T> type);
}
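Ignoring the reactive wrapper, the map contract is just a BiFunction applied to each decoded row. The stdlib-only sketch below uses simplified stand-ins (FakeRow, FakeRowMetadata, and Person are illustrative, not the real R2DBC types) to show the shape of such a mapping function:

```java
import java.util.List;
import java.util.function.BiFunction;
import java.util.stream.Collectors;

public class RowMappingSketch {

    // Simplified stand-ins for R2DBC's Row/RowMetadata; NOT the real interfaces.
    record FakeRow(List<Object> values) {
        Object get(int index) { return values.get(index); }
    }

    record FakeRowMetadata(List<String> columnNames) { }

    // A materialized domain object produced by the mapping function.
    record Person(int id, String firstName) { }

    // Applies a row mapping function to each decoded row, mirroring the shape
    // of Result.map(BiFunction<Row, RowMetadata, ? extends T>).
    static <T> List<T> mapRows(List<FakeRow> rows, FakeRowMetadata metadata,
                               BiFunction<FakeRow, FakeRowMetadata, T> mapper) {
        return rows.stream()
                .map(row -> mapper.apply(row, metadata))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        FakeRowMetadata meta = new FakeRowMetadata(List.of("id", "first_name"));
        List<FakeRow> rows = List.of(
                new FakeRow(List.of(1, "Ada")),
                new FakeRow(List.of(2, "Linus")));
        List<Person> people = mapRows(rows, meta,
                (row, md) -> new Person((Integer) row.get(0), (String) row.get(1)));
        System.out.println(people);
    }
}
```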
      
      





R2DBC is based on Reactive Streams, so it is worth using a reactive library to process R2DBC results properly; a bare Publisher is practically unsuitable for this. All the code examples in this article use Project Reactor.







Applying the specification



R2DBC defines how types should be converted between the database and the JVM, and how specific R2DBC implementations should behave. It includes compatibility guidelines that allow a specific implementation to pass the TCK. R2DBC was largely inspired by JDBC, which is why using R2DBC may feel quite ordinary and familiar. The specification covers the following topics:









The specification also allows extensions, which drivers can implement optionally where the underlying database supports them.







Ecosystem



R2DBC started in the spring of 2018 as a specification for a Postgres driver. Soon after the initial review, the R2DBC working group realized the contribution R2DBC could make to this area in general, and it grew into a full standard specification. Several projects supported these ideas by creating drivers and libraries for use with R2DBC:







Drivers





Libraries





To write your own R2DBC driver, in most cases you have to implement the network protocol from scratch, since most JDBC drivers are built on blocking SocketInputStream and SocketOutputStream. All such drivers are very young projects and should be used with great care. Recently, at the Code One conference, right after announcing the end of work on ADBA, Oracle talked about plans for the OJDBC 20 driver. Oracle's OJDBC 20 driver will ship with several reactive extensions inspired by the work on ADBA and by feedback from the R2DBC working group, so this driver can be used in reactive applications.







Several database vendors are also interested in creating R2DBC drivers.







The same is true for frameworks. Projects like R2DBC Client, kotysa, and Spring Data R2DBC already let you use R2DBC in applications. Other libraries, such as jOOQ, Micronaut, and Hibernate Rx, are aware of R2DBC and plan to integrate with it someday.







What to read, where to get it?



You can start with this:









R2DBC Arabba-RELEASE is a release train consisting of the following modules:









Artifacts of this release:





When using Maven, add the following lines to pom.xml:







<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>io.r2dbc</groupId>
            <artifactId>r2dbc-bom</artifactId>
            <version>Arabba-RELEASE</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <dependency>
        <groupId>io.r2dbc</groupId>
        <artifactId>r2dbc-postgresql</artifactId>
    </dependency>
    <dependency>
        <groupId>io.r2dbc</groupId>
        <artifactId>r2dbc-pool</artifactId>
    </dependency>
</dependencies>
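For Gradle users, an equivalent setup (a hedged sketch, assuming Gradle 5+ with `platform` support for BOM imports) might look like this:

```groovy
dependencies {
    // Import the BOM so individual r2dbc artifacts need no explicit versions.
    implementation platform("io.r2dbc:r2dbc-bom:Arabba-RELEASE")
    implementation "io.r2dbc:r2dbc-postgresql"
    implementation "io.r2dbc:r2dbc-pool"
}
```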
      
      





What's next?



R2DBC Arabba-RELEASE is the first stable release of this open standard. Development of the specification does not stop here: stored procedures, transaction extensions, and database event support (such as Postgres LISTEN/NOTIFY) are just a few of the topics planned for the next version, R2DBC 0.9.







Join the community: you can not only follow the development, but also participate in it!







