In this article I want to share some antipatterns I have found in the code of applications running on Spring. All of them turned up in live code one way or another: I either stumbled upon them in existing classes or caught them while reviewing colleagues' work.
I hope you will find it interesting, and if after reading you recognize your own "sins" and take the path of correction, I will be doubly pleased. I also encourage you to share your own examples in the comments; we will add the most curious and unusual ones to the post.
Autowired
The great and terrible @Autowired is a whole era in Spring. You still can't do without it when writing tests, but in production code it is, in my humble opinion, clearly superfluous. In several of my recent projects it did not appear at all. For a long time we wrote like this:
```java
@Component
public class MyService {

    @Autowired
    private ServiceDependency dependency;

    @Autowired
    private AnotherServiceDependency anotherDependency;

    //...
}
```
The reasons why field and setter injection are criticized have already been described in detail, in particular here. The alternative is constructor injection. The linked article gives an example:
```java
private DependencyA dependencyA;
private DependencyB dependencyB;
private DependencyC dependencyC;

@Autowired
public DI(DependencyA dependencyA, DependencyB dependencyB, DependencyC dependencyC) {
    this.dependencyA = dependencyA;
    this.dependencyB = dependencyB;
    this.dependencyC = dependencyC;
}
```
It looks more or less decent, but imagine that we have 10 dependencies (yes, yes, I know that in this case they should be grouped into separate classes, but that is not the point right now). The picture is no longer so attractive:
```java
private DependencyA dependencyA;
private DependencyB dependencyB;
private DependencyC dependencyC;
private DependencyD dependencyD;
private DependencyE dependencyE;
private DependencyF dependencyF;
private DependencyG dependencyG;
private DependencyH dependencyH;
private DependencyI dependencyI;
private DependencyJ dependencyJ;

@Autowired
public DI(/* ... */) {
    this.dependencyA = dependencyA;
    this.dependencyB = dependencyB;
    this.dependencyC = dependencyC;
    this.dependencyD = dependencyD;
    this.dependencyE = dependencyE;
    this.dependencyF = dependencyF;
    this.dependencyG = dependencyG;
    this.dependencyH = dependencyH;
    this.dependencyI = dependencyI;
    this.dependencyJ = dependencyJ;
}
```
Frankly, the code looks monstrous.
And here many forget that @Autowired is not needed either! If a class has only one constructor, Spring (4.3+) understands that dependencies should be injected through that constructor. So we can throw the annotation away, replacing the whole constructor with Lombok's @AllArgsConstructor. Or, better still, with @RequiredArgsConstructor, not forgetting to declare all the required fields final and thereby getting safe publication of the object in a multi-threaded environment (provided all the dependencies are also safely initialized):
```java
@RequiredArgsConstructor
public class DI {
    private final DependencyA dependencyA;
    private final DependencyB dependencyB;
    private final DependencyC dependencyC;
    private final DependencyD dependencyD;
    private final DependencyE dependencyE;
    private final DependencyF dependencyF;
    private final DependencyG dependencyG;
    private final DependencyH dependencyH;
    private final DependencyI dependencyI;
    private final DependencyJ dependencyJ;
}
```
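For clarity, here is a plain-Java sketch of what @RequiredArgsConstructor effectively generates (with hypothetical Repo and Mapper dependencies standing in for the real ones): a single constructor covering the final fields, which Spring 4.3+ uses for injection without any annotation.

```java
// Hypothetical minimal dependencies, for illustration only.
class Repo {
    String find() { return "data"; }
}

class Mapper {
    String map(String s) { return s.toUpperCase(); }
}

// The hand-written equivalent of Lombok's @RequiredArgsConstructor:
// one constructor over every final field. Since it is the only
// constructor, Spring 4.3+ injects through it with no @Autowired.
class MyService {
    private final Repo repo;
    private final Mapper mapper;

    MyService(Repo repo, Mapper mapper) {
        this.repo = repo;
        this.mapper = mapper;
    }

    String load() {
        return mapper.map(repo.find());
    }
}
```

A side benefit: a plain constructor is trivial to call in unit tests with stubbed dependencies, no container required.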
Static methods in utility classes and enum functions
In bloody enterprise there is a perennial task of converting data-carrier objects from one application layer into similar objects of another layer. To this end, utility classes with static methods like this are still in use (it is 2019 out there, let me remind you):
```java
@UtilityClass
public class Utils {

    public static UserDto entityToDto(UserEntity user) {
        //...
    }
}
```
More advanced users, the ones who read smart books, are aware of the magical properties of enums:
```java
import java.util.function.Function;

enum EntityToDtoFunction implements Function<UserEntity, UserDto> {
    INST;

    @Override
    public UserDto apply(UserEntity user) {
        //...
    }
}
```
True, in this case the call is still made on a singleton object rather than on a Spring-managed component.
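As a self-contained illustration of the enum-singleton function (UserEntity and UserDto are hypothetical carrier classes invented for the sketch), the converter is called like any other java.util.function.Function:

```java
import java.util.function.Function;

// Hypothetical carrier classes, for illustration only.
class UserEntity {
    final String name;
    UserEntity(String name) { this.name = name; }
}

class UserDto {
    final String displayName;
    UserDto(String displayName) { this.displayName = displayName; }
}

// The enum guarantees a single instance, and implementing Function makes
// the converter composable with andThen/compose like any other function.
enum EntityToDtoFn implements Function<UserEntity, UserDto> {
    INST;

    @Override
    public UserDto apply(UserEntity user) {
        return new UserDto(user.name);
    }
}
```

Calling `EntityToDtoFn.INST.apply(new UserEntity("Bob"))` yields a dto carrying the same name.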
Even more advanced guys (and girls) know about MapStruct , which allows you to describe everything in a single interface:
```java
@Mapper(componentModel = "spring", unmappedTargetPolicy = ReportingPolicy.ERROR)
public interface CriminalRecommendationMapper {

    UserDto map(UserEntity user);
}
```
Now we get a Spring component. Sounds like a victory. But the devil is in the details, and sometimes the victory turns Pyrrhic. First, the field names must match (otherwise the pain begins), which is not always convenient; second, any complex transformations of the processed objects' fields bring additional difficulties. And MapStruct itself has to be added as a dependency.
Meanwhile, few people remember the old-fashioned yet simple and working way to get a Spring-managed converter:
```java
import org.springframework.core.convert.converter.Converter;

@Component
public class UserEntityToDto implements Converter<UserEntity, UserDto> {

    @Override
    public UserDto convert(UserEntity user) {
        //...
    }
}
```
The advantage here is that in another class, I just need to write
```java
@Component
@RequiredArgsConstructor
public class DI {
    private final Converter<UserEntity, UserDto> userEntityToDto;
}
```
and Spring will resolve everything on its own!
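One more advantage worth noting: the converter is an ordinary class, so it can be unit-tested without starting a Spring context. A minimal runnable sketch of the idea (Spring's single-method Converter interface is mimicked here so the snippet runs without Spring on the classpath, and UserEntity/UserDto are hypothetical stand-ins):

```java
// Stand-in for org.springframework.core.convert.converter.Converter,
// redeclared so this sketch compiles without Spring.
interface Converter<S, T> {
    T convert(S source);
}

// Hypothetical carrier classes, for illustration only.
class UserEntity {
    final String name;
    UserEntity(String name) { this.name = name; }
}

class UserDto {
    final String name;
    UserDto(String name) { this.name = name; }
}

// The converter itself has no framework dependencies beyond the
// one-method interface, so a plain `new` is enough to test it.
class UserEntityToDto implements Converter<UserEntity, UserDto> {
    @Override
    public UserDto convert(UserEntity user) {
        return new UserDto(user.name);
    }
}
```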
Unnecessary Qualifier
A real-life case: the application works with two databases. Accordingly, there are two data sources (java.sql.DataSource), two transaction managers, two groups of repositories, and so on. All this is conveniently described in two separate configurations. This one is for PostgreSQL:
```java
@Configuration
public class PsqlDatasourceConfig {

    @Bean
    @Primary
    @ConfigurationProperties(prefix = "spring.datasource.psql")
    public DataSource psqlDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    public SpringLiquibase primaryLiquibase(
            ProfileChecker checker,
            @Qualifier("psqlDataSource") DataSource dataSource
    ) {
        boolean isTest = checker.isTest();
        SpringLiquibase liquibase = new SpringLiquibase();
        liquibase.setDataSource(dataSource);
        liquibase.setChangeLog("classpath:liquibase/schema-postgre.xml");
        liquibase.setShouldRun(isTest);
        return liquibase;
    }
}
```
And this is for DB2:
```java
@Configuration
public class Db2DatasourceConfig {

    @Bean
    @ConfigurationProperties(prefix = "spring.datasource.db2")
    public DataSource db2DataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    public SpringLiquibase liquibase(
            ProfileChecker checker,
            @Qualifier("db2DataSource") DataSource dataSource
    ) {
        boolean isTest = checker.isTest();
        SpringLiquibase liquibase = new SpringLiquibase();
        liquibase.setDataSource(dataSource);
        liquibase.setChangeLog("classpath:liquibase/schema-db2.xml");
        liquibase.setShouldRun(isTest);
        return liquibase;
    }
}
```
Since there are two databases, for the tests I want to roll two separate sets of DDL/DML onto them. Both configurations are loaded together when the application starts, so if I mix up a @Qualifier, Spring loses its bearings and, at best, fails at startup. It turns out the @Qualifier annotations are cumbersome and error-prone, yet nothing works without them. To break the deadlock, notice that a dependency can be obtained not only as a method argument but also as a return value, and rewrite the code like this:
```java
@Configuration
public class PsqlDatasourceConfig {

    @Bean
    @Primary
    @ConfigurationProperties(prefix = "spring.datasource.psql")
    public DataSource psqlDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    public SpringLiquibase primaryLiquibase(ProfileChecker checker) {
        boolean isTest = checker.isTest();
        SpringLiquibase liquibase = new SpringLiquibase();
        liquibase.setDataSource(psqlDataSource()); // <-----
        liquibase.setChangeLog("classpath:liquibase/schema-postgre.xml");
        liquibase.setShouldRun(isTest);
        return liquibase;
    }
}

//...

@Configuration
public class Db2DatasourceConfig {

    @Bean
    @ConfigurationProperties(prefix = "spring.datasource.db2")
    public DataSource db2DataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    public SpringLiquibase liquibase(ProfileChecker checker) {
        boolean isTest = checker.isTest();
        SpringLiquibase liquibase = new SpringLiquibase();
        liquibase.setDataSource(db2DataSource()); // <-----
        liquibase.setChangeLog("classpath:liquibase/schema-db2.xml");
        liquibase.setShouldRun(isTest);
        return liquibase;
    }
}
```
javax.inject.Provider
How do you get a prototype-scoped bean? I have often come across this:

```java
@Component
@Scope(SCOPE_PROTOTYPE)
@RequiredArgsConstructor
public class ProjectBuilder {
    private final ProjectFileConverter converter;
    private final ProjectRepository projectRepository;
    //...
}

@Component
@RequiredArgsConstructor
public class PrototypeUtilizer {

    private final Provider<ProjectBuilder> projectBuilderProvider;

    void method() {
        ProjectBuilder freshBuilder = projectBuilderProvider.get();
    }
}
```
It would seem everything is fine; the code works. But there is a fly in this barrel of honey: we have to drag in one more dependency, javax.inject:javax.inject:1, which was published to Maven Central exactly 10 years ago and has never been updated since.
Yet Spring has long been able to do the same without third-party dependencies! Just replace javax.inject.Provider::get with org.springframework.beans.factory.ObjectFactory::getObject, and everything works exactly the same way.
```java
@Component
@RequiredArgsConstructor
public class PrototypeUtilizer {

    private final ObjectFactory<ProjectBuilder> projectBuilderFactory;

    void method() {
        ProjectBuilder freshBuilder = projectBuilderFactory.getObject();
    }
}
```
Now we can, with a clear conscience, cut javax.inject from the list of dependencies.
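The reason the swap is painless is that ObjectFactory<T> is just a single-method factory: every getObject() call goes back to the container, which for a prototype-scoped bean means a fresh instance each time. A plain-Java mimic of that contract (the interface is redeclared here so the sketch runs without Spring; ProjectBuilder is a bare stand-in):

```java
// Stand-in for org.springframework.beans.factory.ObjectFactory,
// redeclared so this sketch compiles without Spring.
interface ObjectFactory<T> {
    T getObject();
}

// Stand-in for the stateful, short-lived helper from the article.
class ProjectBuilder {
}

class PrototypeUtilizer {
    private final ObjectFactory<ProjectBuilder> projectBuilderFactory;

    PrototypeUtilizer(ObjectFactory<ProjectBuilder> factory) {
        this.projectBuilderFactory = factory;
    }

    ProjectBuilder freshBuilder() {
        // each call yields a new instance, just like a prototype bean
        return projectBuilderFactory.getObject();
    }
}
```

Wiring a lambda such as `() -> new ProjectBuilder()` as the factory makes it easy to see that two consecutive calls hand back two distinct objects.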
Using strings instead of classes in settings
A common example of connecting Spring Data repositories to a project:
```java
@Configuration
@EnableJpaRepositories("com.smth.repository")
public class JpaConfig {
    //...
}
```
Here we explicitly spell out the package Spring will scan. Make the slightest typo in the name, and the application crashes at startup. I would like to catch such silly errors early, ideally right while editing the code. The framework meets us halfway, so the code above can be rewritten:
```java
@Configuration
@EnableJpaRepositories(basePackageClasses = AuditRepository.class)
public class JpaConfig {
    //...
}
```
Here AuditRepository is one of the repositories in the package to be scanned. Since we reference the class, we have to import it into our configuration, and now typos are caught right in the editor or, at worst, when the project is built.
This approach can be applied in many similar cases, for example:
```java
@ComponentScan(basePackages = "com.smth")
```
turns into
```java
import com.smth.Smth;

@ComponentScan(basePackageClasses = Smth.class)
```
If we need to add some class to a dictionary of the form Map<String, Object>, it can be done like this:
```java
void config(LocalContainerEntityManagerFactoryBean bean) {
    String property = "hibernate.session_factory.session_scoped_interceptor";
    bean.getJpaPropertyMap().put(property, "com.smth.interceptor.AuditInterceptor");
}
```
but it is better to use an explicit type:
```java
import com.smth.interceptor.AuditInterceptor;

void config(LocalContainerEntityManagerFactoryBean bean) {
    String property = "hibernate.session_factory.session_scoped_interceptor";
    bean.getJpaPropertyMap().put(property, AuditInterceptor.class);
}
```
And when there is something like
```java
LocalContainerEntityManagerFactoryBean bean = builder
    .dataSource(dataSource)
    .packages(
        //...
    )
    .persistenceUnit("psql")
    .build();
```
it is worth remembering that the packages() method is overloaded and using the overload that takes classes instead of strings.
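The class-based call might look like the sketch below (a non-runnable fragment: builder is assumed to be Spring Boot's EntityManagerFactoryBuilder from the snippet above, and OrderEntity is a hypothetical entity from the scanned package). The packages(Class&lt;?&gt;...) overload scans the packages of the given classes, so typos surface at compile time:

```java
LocalContainerEntityManagerFactoryBean bean = builder
    .dataSource(dataSource)
    .packages(OrderEntity.class) // the overload taking classes, not strings
    .persistenceUnit("psql")
    .build();
```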
Do not put all beans in one package
I think you have seen a similar structure in many Spring / Spring Boot projects:
```
root-package/
    repository/
    entity/
    service/
    Application.java
```
Here Application.java is the root class of the application:

```java
@SpringBootApplication
@EnableJpaRepositories(basePackageClasses = SomeRepository.class)
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
```
This is classic microservice code: components are laid out in folders by purpose, and the configuration class sits at the root. While the project is small, everything is fine. As the project grows, fat packages appear with dozens of repositories/services. If the project stays a monolith, so be it. But once the task arises of sawing the overgrown application into parts, the questions begin. Having experienced that pain once, I decided to take a different approach, namely to group classes by their domain. The result is something like:
```
root-package/
    user/
        repository/
        domain/
        service/
        controller/
        UserConfig.java
    billing/
        repository/
        domain/
        service/
        BillingConfig.java
    //...
    Application.java
```
Here the user package contains subpackages with the classes responsible for user logic:
```
user/
    repository/
        UserRepository.java
    domain/
        UserEntity.java
    service/
        UserService.java
    controller/
        UserController.java
    UserConfig.java
```
Now all the settings associated with this functionality can be described in UserConfig:
```java
@Configuration
@ComponentScan(basePackageClasses = UserServiceImpl.class)
@EnableJpaRepositories(basePackageClasses = UserRepository.class)
class UserConfig {
}
```
The advantage of this approach is that, when the need arises, modules are much easier to extract into separate services/applications. It also helps if you are going to modularize your project by adding module-info.java and hiding utility classes from the outside world.
That's all. I hope my work has been useful to you. Describe your own antipatterns in the comments, and we will sort them out together :)