EF Core automatic migration

While migrating an ASP.NET full framework application to ASP.NET Core 2.0, we encountered a significant change in EF Core 2.0 compared with EF 6.x.

 

EF 6.x

In EF 6.x we relied heavily on automatic migrations in our deployment process. We never added a migration to the code base; instead we turned on automatic migrations and allowed data loss. So whenever a new version of the project was deployed to the server, all necessary database changes were applied automatically behind the scenes.

I know that enabling automatic migrations on a production server is not a good idea, but it was a constraint imposed by our deployment team.


EF Core

Automatic migrations have been removed from EF Core; the feature is not present in EF Core 2.0 and there is no plan to add it back. Microsoft thinks its benefits are outweighed by its drawbacks.

There are two methods in EF Core 2.0 related to migrations, Database.Migrate() and Database.EnsureCreated(). Neither of them is a complete replacement for automatic migrations.

Migrate() does not add or create a migration. It only checks whether any pending migrations exist and, if so, updates the database by applying them.

EnsureCreated() creates the database based on the models in the project, but it does not do so through migrations; in fact no migrations are needed by this method. Its disadvantage is that a database created this way cannot be updated later by any migrations. This method was added to EF mainly to help people spin up projects quickly, MVP style.

Conclusion

In the end, we decided to live without EF 6.x-style automatic migrations. Everyone who changes a model is responsible for creating the corresponding migration too, and we never run EF commands on the production server to update the database. Instead, we call the Migrate() method on each startup to bring the database up to the latest available migration.

Sample code would look like this:
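This is a minimal sketch for an ASP.NET Core 2.0 Program.cs rather than our exact code; AppDbContext and Startup are placeholder names for your own DbContext and startup class:

```csharp
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;

public class Program
{
    public static void Main(string[] args)
    {
        var host = BuildWebHost(args);

        // Apply any pending migrations before the app starts serving requests.
        using (var scope = host.Services.CreateScope())
        {
            var db = scope.ServiceProvider.GetRequiredService<AppDbContext>();
            db.Database.Migrate();
        }

        host.Run();
    }

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .Build();
}
```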

 

The sample code was inspired by the one here.

Migrating to ASP.NET Core 2.0

It is early September 2017 and ASP.NET Core 2.0 has been out for a week or so; my machine already has it installed. There is a project that was started back in the early ASP.NET Core 1.0 beta days. At that time we did not use ASP.NET Core because of its beta nature; instead we went with ASP.NET and OWIN.

Now that ASP.NET Core is mature enough, we are going to upgrade our project to ASP.NET Core 2.0 on .NET Core. Hopefully we can soon develop and run the project on Ubuntu in addition to Windows!


Personally I have been working with Visual Studio Code for the last year, so I am comfortable with it. But since my teammates are using Visual Studio 2017, I am starting the work in Visual Studio 2017; later I can switch to Visual Studio Code for less fundamental coding sessions.

The target is a solution consisting of several projects, each responsible for a set of functionality: Domain for basic entities, enums, DTOs, etc.; Core mainly for business logic; and Web for everything web related. The Domain project sits at the lowest level, so I started with it.

Namespace and NuGet changes aside, I encountered my first challenge while porting a class that inherits from IdentityUser. The old Identity accepts a whole row of generic parameters, while Identity Core has changed them significantly.

Old Identity User
Identity Core User
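Roughly, the change looks like this (a sketch; AppUser and the related type names are placeholders, not the exact classes from our project):

```csharp
using System;

// Before: ASP.NET Identity 2.x on the full framework — the custom user
// drags a whole row of generic parameters along with it.
namespace OldApp
{
    using Microsoft.AspNet.Identity.EntityFramework;

    public class AppUser : IdentityUser<Guid, AppUserLogin, AppUserRole, AppUserClaim> { }

    public class AppUserLogin : IdentityUserLogin<Guid> { }
    public class AppUserRole : IdentityUserRole<Guid> { }
    public class AppUserClaim : IdentityUserClaim<Guid> { }
}

// After: ASP.NET Core Identity 2.0 — only the key type stays generic; the
// login/role/claim types are configured on the Identity DbContext instead.
namespace NewApp
{
    using Microsoft.AspNetCore.Identity;

    public class AppUser : IdentityUser<Guid> { }
}
```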

This story is going to be continued…

A good architecture for an ASP.NET Core application

Since ASP.NET Core 1.0 was released in June 2016, I have used it in at least two projects. Each of them taught me new lessons, especially about project architecture: layering, DI, mapping and DTOs. A while ago I wrote about my experience here. Now I am trying to build on that experience and describe a project based on the improved structure.

 

1- Layers

One of the most important decisions is how to layer the project. Personally I do not like having many layers, but here I chose three layers for a good reason: I want to hide the database from the presentation layer. I do not want controllers or Web APIs to be aware of the internal structure of tables and fields, because this way:

  • Designing controller actions and Web APIs is easier, as they do not have to know everything about internal table design

  • Security is better. Since the actions are not aware of the complete model (table) design, ASP.NET model binding cannot fill hidden fields with incorrect or malicious user input (the classic over-posting problem).

  • It avoids the ORM's dirty-checking mechanism. If you receive an entire database model, there is a chance that Entity Framework detects it as a dirty object and tries to save it to the database when you did not mean it to.

  • It avoids confusing mappings by exposing only the properties that are actually needed

Here is my suggested layering:

ASP.NET Core project layers

 

The user works with the presentation layer. The presentation layer knows only about the service layer and transfers data to it via DTOs. The service layer in turn communicates with the data access layer via database models. In other words, the service layer isolates the presentation and data access layers from each other. The data access layer contains anything that should not be visible to the presentation layer.

The data access layer can be thought of as a thin layer, since it contains only the database models and the DbContext. The service layer contains the repositories and all services. The presentation layer contains the ASP.NET controllers, cshtml and CSS files.
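A condensed sketch of the three layers, one class from each (Product, ProductDto, ProductService, AppDbContext and the controller are placeholder names, with AppDbContext standing in for the project's DbContext):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

// Data access layer: EF Core model, never visible to the presentation layer.
public class Product
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
    public decimal CostPrice { get; set; }   // internal field the UI must never bind
}

// Service layer: accepts a DTO, maps it onto the model, persists it.
public class ProductDto
{
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class ProductService
{
    private readonly AppDbContext _db;
    public ProductService(AppDbContext db) { _db = db; }

    public async Task<Guid> CreateAsync(ProductDto dto)
    {
        var product = new Product { Id = Guid.NewGuid(), Name = dto.Name, Price = dto.Price };
        _db.Set<Product>().Add(product);
        await _db.SaveChangesAsync();
        return product.Id;
    }
}

// Presentation layer: the controller only ever sees the DTO.
public class ProductsController : Controller
{
    private readonly ProductService _service;
    public ProductsController(ProductService service) { _service = service; }

    [HttpPost]
    public async Task<IActionResult> Create(ProductDto dto)
        => Ok(await _service.CreateAsync(dto));
}
```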

 

2- Repository

Entity Framework's DbContext is itself a repository. Normally there is no need to wrap its Add method, except in enterprise projects where special processing is needed on each add or update, for example setting a last-update timestamp in a specific field. Adding an extra repository on top of DbContext/DbSet also makes it harder when you want to update just some fields of a record.
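For example, updating a single column directly through the DbContext is straightforward, whereas a generic repository Update(entity) method usually marks the whole entity as modified. A sketch (Product and _db are placeholder names for your own model and DbContext):

```csharp
// Change only the Price column of a Product without loading the whole row.
public async Task UpdatePriceAsync(Guid productId, decimal newPrice)
{
    var product = new Product { Id = productId, Price = newPrice };
    _db.Attach(product);                                          // tracked as Unchanged
    _db.Entry(product).Property(p => p.Price).IsModified = true;  // only Price goes into the UPDATE
    await _db.SaveChangesAsync();
}
```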

 

3- Unit of work

Unit of work is usually not a tough problem: you simply call the DbContext's SaveChanges() in the controller's Dispose method, which gives all actions an automatic unit of work. But wait, there is a case where this is not a good idea: what if a problem occurs while committing the changes to the database?

You will not be aware of it. Worse, the changes fail to commit and the user will not be aware of it either, because by the time Dispose is called the response has already been sent to the user's machine and it is too late to inform them. My approach is to not use the Dispose method and instead call SaveChanges() manually in each controller action, so you can detect possible errors and tell the user about them.
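A sketch of what that looks like inside an action (the service, DTO and view names are placeholders); a failed commit is caught while there is still time to show the user an error:

```csharp
[HttpPost]
public async Task<IActionResult> Edit(ProductDto dto)
{
    if (!ModelState.IsValid)
        return View(dto);

    _service.Update(dto);                  // stages the changes on the DbContext

    try
    {
        await _db.SaveChangesAsync();      // explicit commit, still inside the request
    }
    catch (DbUpdateException)
    {
        ModelState.AddModelError(string.Empty, "Saving your changes failed. Please try again.");
        return View(dto);
    }

    return RedirectToAction(nameof(Index));
}
```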

 

4- Handling validation in the application or in the database?

One popular approach to database validation is to commit the data changes to the database and see whether everything goes well. For example, putting duplicate values into a unique column does not cause an error on the application side, but when the change is sent to the database it generates an error complaining about the duplicate values.

There are two approaches here. The first is to leave it as is and let the database enforce the business rules for us. This approach needs almost no effort to implement, since everything is pushed to the database side, but it requires an accurate model definition because that is what gets converted to the database schema. It is also dependent on the underlying database: if the database changes, there is a chance the behavior of the system changes too; something that works on MSSQL 2014 may not behave the same on SQLite. It also makes unit testing hard, as some business logic lives outside the code and therefore cannot be unit tested.

The second approach is to not rely on the database: all validations and rules are checked in the application itself. Here there is no dependency on the database and unit tests work well, but it needs extra code and carries a performance penalty, since the database is hit more than once: at least once for the validation queries and once for committing the changes. Normally my personal choice is to do validation on the application side.
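Continuing the unique-column example, a sketch of the application-side check (User, RegisterDto and the Result helper are placeholder types, not from any particular library):

```csharp
public async Task<Result> RegisterAsync(RegisterDto dto)
{
    // First round trip: enforce the uniqueness rule in code before saving.
    if (await _db.Users.AnyAsync(u => u.Email == dto.Email))
        return Result.Fail("This email address is already registered.");

    _db.Users.Add(new User { Id = Guid.NewGuid(), Email = dto.Email });

    // Second round trip: commit. Keeping the unique index in the database
    // is still wise as a last line of defence against race conditions.
    await _db.SaveChangesAsync();
    return Result.Ok();
}
```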

 

5- A class for each service

I used to have a few big classes for services, plus multiple DTO/ViewModel/Model classes to receive user input from the ASP.NET binder. But now I think that is not a good idea: it led to large service classes that could be considered God classes, along with multiple model classes containing only simple properties and no methods.

Now I prefer a separate class for each single service. It contains all the model properties plus the code needed to implement that service. It is more object oriented and more manageable. In ASP.NET Core I pass IServiceProvider to the class so it can obtain the services it needs from the built-in DI container. See a sample:
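This is a minimal sketch of the idea; ChangeUserEmailService and AppDbContext are placeholder names:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;

// One class per service: the input "model" properties and the behaviour live together.
public class ChangeUserEmailService
{
    private readonly AppDbContext _db;

    // Properties filled by the caller (or the ASP.NET binder).
    public Guid UserId { get; set; }
    public string NewEmail { get; set; }

    public ChangeUserEmailService(IServiceProvider provider)
    {
        // Resolve dependencies from ASP.NET Core's built-in DI container.
        _db = provider.GetRequiredService<AppDbContext>();
    }

    public async Task ExecuteAsync()
    {
        var user = await _db.Users.FindAsync(UserId);
        user.Email = NewEmail;
        await _db.SaveChangesAsync();
    }
}
```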

 

6- Misc points

  • Make the model's Id a GUID instead of an auto-increment integer. It performs better because you do not need to query the database again to get the assigned id.

  • I do not use interfaces at all; I have not found them useful here.

  • Use Bower or similar tools to install client-side frameworks.

  • Be careful when using AutoMapper. Properties can easily differ between the two sides, and no error is raised when they do.

An outsourcing experience

Recently I have been working on an outsourced/remote project: a software solution consisting of both web and mobile parts. Since I am always interested in outsourced projects, any experience with them matters to me, so I am documenting my experience on this project from an almost non-technical perspective.

Keep calm and outsource.

  1. In this project we had project management, but it was not enough. Not following sprint discipline was a main issue: new issues were added to the current sprint without regard for how this undermines planning.
  2. A similar issue was losing focus by working on tasks outside the current sprint.
  3. In some cases there were tasks that took only half an hour to resolve, and others that took about 12 hours. In other words, tasks were not broken into comparable pieces: some were really more than one task, while others could simply have been merged with other tasks.
  4. Too many requirement changes caused real delays and were hard to apply. I know changes are normal in agile environments, but I mean too many changes. Consider repeated changes to database table structures that forced many other changes in the back end and even in the APIs.
  5. The project had a reasonable amount of documentation from day one, but in some areas it was not clear enough. The documentation problem grew worse as the project grew and new people joined. Questions and answers from team members should have been folded back into the documentation, but they were not. The documentation could have been more up to date and better structured.
  6. I was pushed to implement soft delete in the system tables, but after a while I realized we did not really need it; all the customer wanted was a log of record changes. I am not sure whether to call that bad communication or letting the product owner make technical decisions.
  7. Not using full-featured ALM tools hurt productivity. A tool that automatically published the latest version on each git push would have helped us keep the test server more up to date.
  8. Not all team members were comfortable with the written culture of remote working. In a team spread across several cities, it was important that every activity be logged in Jira, Slack, email, etc., so any member could find out about others' work and tasks. This matters even more when the team members' overlapping hours are limited.
  9. As a team working across different time zones with different schedules, we had a serious problem with long wait times between actions. Member A files a bug in the bug tracker; hours or even a day later member B wants to resolve it but needs more information, so he adds a comment, which member A sees a day later, and so on. Resolving one bug can take several days. Either member could have shortened this with better problem-solving skills: member A by narrowing down the bug with more inputs and more system states, and member B by thinking on A's behalf and trying to solve the issue with fewer round trips.
  10. Team organization was not ideal either. Splitting the team into a web part and a mobile part made tracking issues harder. We could have been more agile if each part had been able to run the other part's code by itself. When, in a small team, work is handed over via test servers rather than source code, even a small task takes more time to test.

 

I believe these kinds of issues have roughly three roots. One is cultural differences, which give us different impressions of team roles such as scrum master or back-end developer. Another is not putting enough time into managing the team and acting on its weaknesses. The last, in my view, is that the team had not worked together before: a team needs time to reach its full power, and team members need time to get acquainted with each other.

 

For a technical review on this project, see here and here.

Creating a framework to be used as a base for many other applications

Interesting: another retrospective just a few days after the last one. In 2010/2011 we developed a base ASP.NET WebForms framework named ABC and then built a series of web applications on top of it. One of them is DEF, which is currently (late 2016) heavily used in production and will, I guess, remain there at least until 2021 or even 2025 (there is no retirement plan yet). DEF deals with a database of more than 10,000,000 records and is used nationwide as a national project. It is encountering performance problems and is under constant small changes from the client.

 

Many companies, especially those that are not very technical at the managerial level, love the idea of writing code once and reusing it across many projects. That is why many IT companies have internal frameworks, and why people like me keep asking whether it is a good idea at all and, if so, what the best platform for it is. Creating ABC and building DEF and other applications on it is a good sample of this approach, so I am going to review its real weaknesses now that it has been in production for a few years.

 

Large Data

ABC has been used as the base of many applications, but none of them deals with as many database records as DEF. On the other hand, ABC was not designed for that volume of data, so DEF has more performance issues than other ABC-based projects. The performance issues in turn cause other problems, such as failures to save records when system load is high.

 

Upgrade

Because ABC is the base framework and many applications built on it are in production, some with multiple instances, upgrading ABC is hard. Suppose I want to upgrade a component in ABC to improve some feature: the upgrade may break other applications, or at least I cannot be sure it is safe for all of them. In DEF's case we needed to upgrade NHibernate and did so through a painful and very lengthy process.


Internal mechanism

As with upgrades, we have difficulty changing internal mechanisms and designs. For example, changing transaction management is more or less necessary for DEF, but it has to be done through ABC, and since others use ABC too, it is not easy and sometimes simply impossible. As a result DEF is forced to live with problems whose causes we know, and which we would know how to fix if DEF were standalone.

 

Doing everything through the application channel

For a small application that can run on a shared host, it is not a bad idea to do every operation through the web application itself. But in a large application like DEF there are situations where other tools are needed. For example, we have batch operations that take 30 to 60 minutes to complete; a Windows service is a good tool for this kind of work, yet DEF runs batches through ASP.NET and IIS, which is not good. Application pool restarts often occur during batch or lengthy operations; they also slow things down for logged-in users, drain IIS resources and possibly cause secondary problems. Another example is handling a database table with a very large record count: we handled it with difficulty inside the application, while a better way would have been to introduce a secondary database and define jobs to move old records into it, keeping the main database lighter.


Creating packed mega ASP.NET controls

If you are familiar with ASP.NET WebForms, you know there are plenty of controls available, such as the drop-down list. In ABC we created some mega controls in the same spirit to handle bigger operations; think of something similar to Telerik or Dundas controls, but larger and broader, for example a grid that could do paging, sorting and searching. In theory they were very useful and time saving, but they were tightly coupled to ABC's internal structure and very inflexible.

 

Conclusion

General-purpose frameworks look very promising before you use them, but in production many cracks show up. They work well over the short term and for very, very similar applications. If you want speed and flexibility, think harder about a "create each application from scratch" strategy.

Reviewing the structure of a web application I'm working on

There is a practice in Scrum called the retrospective, which is about reviewing the work done in a sprint. I love the idea; I think conversation and communication in a software development team are very important. Inspired by the Scrum retrospective, I'd like to review the architecture and design of a project I've recently been involved in. The project is not finished yet, but I think it is a good time to review its structure.

 

The project's back end is implemented with ASP.NET Core MVC and Entity Framework Core and serves both a Web API and server-rendered content (ASP.NET MVC). Development is done mostly on Ubuntu, but on Windows too.

copyright https://hakanforss.wordpress.com/2012/04/25/agile-lego-toyota-kata-an-alternative-to-retrospectives/

Project Structure

Although the project is not very big or complex, with about 20 tables in the database, we decided to have four projects: one for the domain, DTOs and contracts, named Domain; another, mainly for business logic, called Core; another for the web project itself, containing the cshtml files, controllers and wwwroot directory, called Web; and another for unit tests, called Test. I agree this is a very common project structure for ASP.NET web applications, but I saw no benefit from it beyond a simple categorization that was also achievable with directories. I think it would have been better to have two projects: one for the web (combining Domain, Core and Web) and another for tests.

 

Interfaces

Programming against interfaces is very popular in C#. It has gained even more popularity with the wide use of dependency injection in ASP.NET, and ASP.NET Core's built-in dependency injection has increased it further. We followed this convention in our project, creating one interface per service class. But we have no mocks in our unit tests, so I think using so many interfaces is a bit of over-engineering: no interface in our project has more than one implementation. The large number of interfaces just slowed development down, since adding each new method required changes in two places, the service itself and the interface it implements.

 

Soft Delete

Soft delete means not deleting database records physically, but keeping them in the database and setting a field named IsDeleted to true on each deleted record; the application then neither shows nor processes soft-deleted records. We added this feature so we could track data changes and make sure no data was ever really lost. For that purpose we could instead have used a logging mechanism: whenever a record is deleted, add a log entry saying who deleted what data and when. Implementing soft delete imposed many manual data integrity checks on the application; on every delete we have to check whether any dependent items exist and, if so, prevent the deletion.
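For reference, EF Core 2.0's global query filters make the "hide soft-deleted rows" part fairly cheap; the manual dependency checks described above are still needed. A sketch with placeholder entity and context names:

```csharp
using System;
using Microsoft.EntityFrameworkCore;

public class Order
{
    public Guid Id { get; set; }
    public bool IsDeleted { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Order> Orders { get; set; }

    protected override void OnModelCreating(ModelBuilder builder)
    {
        // Global query filter: every query on Orders automatically
        // excludes soft-deleted rows.
        builder.Entity<Order>().HasQueryFilter(o => !o.IsDeleted);
    }
}

// "Deleting" an order then just flips the flag:
//     order.IsDeleted = true;
//     await db.SaveChangesAsync();
```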

 

Authorization

I personally over-engineered authorization: in many cases I checked roles in both controllers and services. My emphasis on the separation of controllers and services was a bit too strong. There is no entry point into the app other than the MVC and API controllers (which are the same thing in ASP.NET Core), so checking roles in the controllers is enough.

 

Using DTOs to access the data layer

Many application designs allow direct access to database models; for example, a controller action receives a MyModel instance directly and passes it to a DbSet or a service to be saved. This is dangerous, because the ORM's dirty-checking mechanism may save it to the database by mistake. In this project I used DTOs to pass data to and from the CRUD services, so the controllers are not aware of the database models. It increased the amount of code, but I think it saves us from mysterious data updates in the database.

Update:

Second part of this post can be found here.

Software component re-use in ASP.NET and Django and is it really beneficial?

I have been an ASP.NET developer for years. In many of the companies and projects I have worked for, there is a constant need for reusable components; some companies were so serious about it that they based their business on it. When I run software projects myself (as project manager or freelancer), I run into this subject again.

 

Realistic or not, it would be very desirable to reuse previously developed components in new projects; it would save a lot of time, especially in repetitive projects. Code reuse exists at different levels in ASP.NET. You can use HTML helpers or user controls to share components between projects, and you can reuse services like logging or authentication across several projects. There is also the kind of component reuse that modules like ELMAH employ, based on HTTP modules or middleware. None of these is the kind of component reuse I need. What I need is full component reuse: all the core and UI elements together. In the logging example, I need the core logic and all the needed UI packaged together in a way I can plug into a new application, so the other components of the application can communicate and integrate with it. I know ASP.NET has a feature called Areas that approximately does what I need, and it handles reuse of views (UI) well: I just copy the files into its directory. But an Area is not designed as a truly separate component; it is forced to be aware of the host application's internals, especially its database design. Maybe that is why ASP.NET MVC Areas are not very popular.

 

I've read a lot about Django that is re-use friendly by design. I see it is based on apps. Also I see that there is an app sharing web site for it. But never used it in a real project.

 

Thinking more and more about software reuse (in the context of web development), I realize that not every kind of component reuse suits every application. There is a trade-off here. If you want a reusable app, you have to develop it as generically as you can, which itself introduces complexity, creates bugs and even consumes more development time. Once you start sharing a component among several projects, you must think carefully about every change you make: each change must be confirmed as backward compatible, since others are using your app, so maintenance becomes hard. Apparently this is why many web development teams do not rely on reusable components much.

 

There is at least one situation where this model of software reuse makes sense: when you build a reusable app for a limited range of projects and a limited period of time, intending to use it only within one family of projects. It is nice that Django applications are developed in this manner by default, whether you want to reuse them or not.

Finding a good front end solution for a semi single page web application

I am in the middle of deciding what technique or library to use on the front end of a typical web application. The major part of this application is built with ASP.NET MVC using a full back-end approach, so there is very little front-end development and few Ajax calls, apart from cascading drop-downs or auto-completes. Every operation is done via server post-backs: when you perform a CRUD or other operation, your request is sent to the server, the result is rendered on the server, returned to the client and finally shown. With this approach the front end cannot get very complicated; pages cannot have too many elements and/or too many operations, and large operations need more than one page: a page for the main operation and a page for each sub-operation, typically navigated to from a main list page.

 

The problem begins when some pages need more than one operation, for example a page for CRUD on a model plus sub-pages for complementary operations. Assuming no post-backs are allowed here, we need front-end development: interacting with the DOM and reading/updating it takes plenty of jQuery code, plus a few server APIs and Ajax calls to them. As pages get larger and the need for user interaction grows, for example getting the user's confirmation or opening more dialog boxes, the volume and complexity of the front-end code increases, so the need to reduce complexity and development time arises.

 

Here we have three options. First, do not allow much front-end development and handle the whole application with back-end MVC only. Front-end pages will be simple this way: pages cannot have more than one operation and every operation causes a post-back. The total number of pages will increase, since each single operation needs a separate page.

 

Second, we can allow multi-operation pages but make no Ajax calls. That means jQuery is used to open dialog boxes and gather user data, but instead of Ajax, the form is posted to the server, so a post-back occurs. This technique is inflexible because it is not easy to show dialog boxes or get confirmations from the user: everything is posted to the server, possible errors are detected there, and error messages are sent back to the client. Also, no state can be maintained; after the page comes back from the server, active controls and even data entered in inputs are lost. Because of these inflexibilities, this technique is not very practical.

 

The third technique is to get help from JavaScript libraries and frameworks developed for this problem. This way we get all the functionality we need on the front end: good user interaction, low code complexity and low implementation time (aside from learning and setup time). The cons are the learning time, the setup time and the overhead it may introduce.

 

If we go with the third solution, a good choice is one of the MVC/MVVM JavaScript frameworks mostly used for SPAs. Our goal is also an SPA, but only for some sections of the web application, not all of it. The famous JS frameworks for SPAs are Angular.js and Ember.js, but they are too large for our problem, so a smaller one must be selected based on several comparisons, including this, this and this. From these I feel that Backbone.js (MVP) and Knockout.js (MVVM) are the better choices. Backbone.js uses the more familiar MVP pattern, and I have read somewhere that Knockout.js development has slowed and its community is shrinking. So Backbone.js could be the final choice.

Update

After talking it over with my friend, I decided to add a fourth solution: doing the front-end manipulation with pure jQuery/Ajax code. This code may be lengthy, but it has less overhead than employing an SPA framework like Angular.js or Backbone.js.

Update 2

Shawn Wildermuth also did a comparison recently. Find it here.

Selecting a web framework based on reusability and pluggability of components

There are plenty of web framework comparisons on the Internet. Many compare frameworks in general, some compare them on performance, and others on learning curve, popularity, architecture, speed of development, etc.

 

But I am interested in focusing on the reusability and pluggability of components. In a web development team it is good to be able to use previously developed parts of a project in new projects. For example, many projects have a membership or accounting section that could be developed once and reused in different projects; you could even think of ticketing or organizational structure management shared across separate web projects. The goal is to reduce development effort when building mid-sized web projects.

Reusability

Django presents itself as exactly that, but what about Rails, ASP.NET, MEAN or the other common web frameworks?

 

Django has an administrative CRUD interface that can save a lot of time during development. Django's motto is "the web framework for perfectionists with deadlines". Every Django application is composed of apps, and each app can implement an independent area of the business. Django claims that you can join different apps together to create a complete web application.

 

Django has good documentation, but its learning curve is steep. It seems efficient for database-driven applications.

Django is not fully object oriented. It is not as fast as Node.js, but it does not force you to build everything from scratch. It also seems to have fewer batteries included than Node.js and Rails, and its job market is even smaller than those of Rails and ASP.NET.

 

Rails is opinionated, so many settings and conventions are set by default. Rails developers can learn it faster and develop more rapidly. It is not very strong on performance, but it has a strong community. Rails is popular among Mac users. It is also easier to deploy, as cloud solutions support it better. Its rapid development can compete with the reusability that Django claims.

 

Previously I wrote about the subject here, here, here and here.

 

What are your opinions and experiences?

Using cloud storage services

For someone who practices remote working, freelancing and working with distributed teams, using cloud storage services like Google Drive is inevitable. Each service has its own pros and cons, especially when you live in a place with serious Internet constraints and international sanctions.

 

Using cloud storage services helps you stay organized and productive. Do you want to send a copy of a file to colleagues and, after updating it, send updated copies again? Using email for that is tedious and error-prone. Instead, you can put the file in a cloud storage system and share it with your colleagues, so everyone has access to it. If you or a colleague updates the file, there is no need to send a new copy around; everyone just gets the new version automatically. Cloud storage services also keep a history of the file, so you can see changes over time or download an older version if you want.

 

Cloud storage services provide good ways of notifying collaborators when someone changes a file or adds new files to a shared space. They also keep track of conflicts, and they are useful for individuals who just want to share their own files across their own devices: PCs, notebooks, tablets and mobile phones.

 

Not using a cloud storage service in a distributed team is, to some degree, like a development team not using source control.

 

Despite all the advantages of cloud storage services, there are also disadvantages. They tend to consume huge amounts of bandwidth, and some of them are expensive. They also make it easier to get confused by different operating systems, file formats, utilities and so on: you use MS Office while others put LibreOffice file formats into the shared space; you can edit a particular file on your PC but your tablet has no editor for it; your MacBook works well with a given cloud storage system while there is no suitable client for your tablet. And don't forget that security is a big concern.


For many users, especially those with Android devices, the default choice is Google Drive. It has online editors and integrates well with Android devices, but its big problem is US sanctions: its desktop client for Windows cannot be downloaded from sanctioned countries. I guess even if you download it with some workaround, restrictions may remain, because its APIs are still out on the Internet and you can only hope they are not blocked for sanctioned countries.

 

Another popular choice is Microsoft OneDrive. It has excellent integration with Microsoft Office Online, and its plans are much less expensive than other services. But there are problems with it too: it has no official Linux client, and the unofficial project for it, onedrive-d, does not work very well. OneDrive also stopped serving sanctioned countries in October 2016.

 

For Linux users living in sanctioned countries, Dropbox is a good choice. It is accessible from these countries, it installs easily on Linux and it even works well with Android, though it is an expensive service. It also offers Office Online for editing files online.

 

If you are a Linux user using LibreOffice and you want to be able to edit your files on Android and online in Dropbox, be careful: there is no Android version and no online version of LibreOffice. You will have to save your files in MS Office format and use MS Office for Android and MS Office Online.