SSDTPokedex: Integrating Slack and VSTS Into GitHub Repo - An Infinite Improvement
One of the home projects I’m currently working on is migrating a database over from SQLite to SQL Server. There are several tasks that need to be accomplished before we can call this a success. Broadly speaking, they fit into the key pillars of successful software development.
So the development part is well under way: there is a repo in GitHub of an SSDT solution that compiles locally (it works on my machine, anyway). Plus I have a couple of releases: one intentionally broken and one fixed. There are still plenty of tasks to get through in the code: initially none of the tables had named primary or foreign keys, for example, though by now most have been named. And all of the static data is loaded either by post-deploy scripts or a post-post-deploy script. I’m making good progress, but there is still much to do. However, before I press on with this, I need to get the other pillars in order.
For example, the primary/foreign key refactoring was a work item that I needed to stick in a backlog. And though I’m not deploying the database anywhere near production yet, I am creating releases intended to be deployed. So clearly I need some kind of backlog and build solution.
Travis CI integrates nicely with GitHub, but unfortunately it does not support SQL Server/SSDT. That’s a shame, because it would have slotted neatly into my pipeline; by this I mean I want to keep things simple because I’m a simple guy, and I would only have had to use the one tool. So the next option is VSTS. I’m really pleased to say that VSTS integrates seamlessly with GitHub: there’s no issue in creating a build and setting the repository to the SSDT.Pokedex repo. By following the step-by-step help online I was able to connect my repo to a VSTS build definition.
As for the build itself within VSTS, when you create a new build there are defaults set up that I was able to use with no extra configuration on my part. This is partly because we are building a single solution: there are no tests to run, no NuGet packages to download, nothing to package up, etc. So it’s quite a basic build. Nevertheless, what I’m trying to achieve here is some sort of order, some sort of quality across all the pillars, so that in the future I can come back and build upon this (e.g. I may want to implement the headless SSDT NuGet package, or package and publish the builds as a nightly release… or whatever). And the important thing to remember is that all the scripts that are run as part of the deployment are validated by this build. So for each check-in the entire project is built, ensuring some level of quality.
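Under the hood, that default build boils down to little more than an MSBuild invocation against the solution. As a rough sketch (the solution filename is an assumption on my part), the equivalent command on the build agent would look something like:

```shell
:: Build the SSDT solution in Release configuration. Compiling the
:: .sqlproj validates every object and deployment script, and produces
:: a DACPAC under bin\Release. Run from an MSBuild command prompt.
msbuild SSDT.Pokedex.sln /t:Build /p:Configuration=Release /verbosity:minimal
```

This is exactly why a bare compile is still worth having: a broken post-deploy script fails the build, not the deployment.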
Let’s return to the pillars for a moment. The fact of the matter is this: when it comes to software development, the only thing you strictly need to do is write code. CI/CD is not required, nor is testing. All that is needed is the code. But there’s no point in building out all the code and not bothering with the planning, or automating build/deploy, or monitoring the application to ensure that it even works! Because no matter how good the code is believed to be, if you’re not constantly building, deploying, and running tests, there’s no way you can honestly say how good your application is until you release it, and by then it’s usually too late to do anything about it.
But I digress…
So we have a basic build set up that will compile the solution, and on “Triggers” I have ticked “Continuous Integration” to ensure each push to the central repo kicks off a build.
So now that I have a project in VSTS and a CI build, I can focus on getting some work items in the backlog and start prioritising them. I’m not going to document what I wrote in the backlog here, because future posts will invariably cover the tasks I have completed. But here is a screenshot to prove that there is indeed a backlog of sorts: one that has yet to be prioritised or fleshed out in detail, but a backlog nonetheless.
Top priority is to get the DACPAC deploying post-build, so I have to spend a bit of time on that before I push on with refactoring. Even if it is only to the LocalDB instance on the hosted build agent, that will be better than nothing. The primary requirement is to test that the DACPAC actually deploys. I’ll fully admit that the deployment part of the pipeline is lacking, and I need to spend some time getting that story right and up to the same standard as the other pillars.
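For what it’s worth, the deployment step I have in mind is just a SqlPackage.exe publish of the DACPAC against the agent’s LocalDB instance. A minimal sketch, where the source path, instance name, and database name are all assumptions for illustration:

```shell
:: Publish the DACPAC produced by the CI build to the hosted agent's
:: LocalDB instance. Path, instance, and database name are illustrative.
SqlPackage.exe /Action:Publish ^
  /SourceFile:"SSDT.Pokedex\bin\Release\SSDT.Pokedex.dacpac" ^
  /TargetServerName:"(localdb)\MSSQLLocalDB" ^
  /TargetDatabaseName:"Pokedex"
```

Even this throwaway deploy catches a whole class of problems a compile can’t, such as post-deploy scripts that fail at runtime.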
Finally, monitoring. Currently all I have is a CI Build, so it would be useful to get feedback on when that completes and what the status is. Again, VSTS really excels here by integrating with Slack, and it took no time at all to create a Slack team and a channel called VSTS. Then it was a case again of following the ridiculously simple help to integrate Slack with VSTS.
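Under the covers, the VSTS service hook is essentially posting JSON to a Slack incoming webhook. Something along these lines is what arrives in the channel when a build finishes (the webhook URL and message text below are placeholders, not the real integration payload):

```shell
# Post a build notification to a Slack channel via an incoming webhook.
# The URL is a placeholder; a real one comes from Slack's
# incoming-webhook configuration page for the team.
curl -X POST -H 'Content-Type: application/json' \
  --data '{"channel": "#vsts", "text": "Build completed: succeeded"}' \
  https://hooks.slack.com/services/T0000/B0000/XXXXXXXXXXXX
```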
So let’s test this: I’m going to check in a change that I have made to a table (I am beginning to give names to the primary/foreign keys in earnest).
Now let’s check VSTS:
Excellent, it’s kicked off a build automatically. And if I check Slack…
…we can see that a message was posted to the VSTS channel I created.
The GitHub project itself has two branches: one dev and one master. All work is done on the dev branch, and when a workspace is sync’d to the dev branch on GitHub a build kicks off. Only when the build goes green, a code review has been submitted, and the code accepted can a pull request into master take place. Master is where all the releases are taken from.
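Day to day, that workflow is just ordinary git: commit to dev, push to kick off the CI build, and only then raise a pull request into master. A sketch (branch and remote names as described above; the commit message is illustrative):

```shell
# Work happens on dev; the push to GitHub is what triggers the VSTS CI build.
git checkout dev
git add .
git commit -m "Name primary/foreign key constraints on more tables"
git push origin dev   # CI build kicks off; wait for green before the PR to master
```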
I just want to go back a bit, because I glossed over something super-useful in GitHub: you can actually create Status Checks to ensure that certain criteria are met before a pull request to a protected branch can be merged. Typically my philosophy with projects is just the one branch, but this one has a dev branch because I’m working with other people remotely, and so being the autonomous kind of guy I am, I wanted a barrier between what is in dev and what is in master: any mess from people checking in has to be fixed and sorted in dev before it is merged to master. I should also add that there is a CI build for master that has the same level of alerts in the Slack channel.
Below are the requirements for merging into master.
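Incidentally, the same protection rules can be configured through the GitHub API rather than the web UI. A hedged sketch, where the repo owner and the name of the status-check context are assumptions on my part:

```shell
# Require the CI status check and a pull-request review before anything
# can be merged into master. OWNER and the status-check context name
# ("continuous-integration/vsts") are assumptions for illustration.
curl -X PUT \
  -H "Authorization: token $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  https://api.github.com/repos/OWNER/SSDT.Pokedex/branches/master/protection \
  -d '{
        "required_status_checks": {"strict": true, "contexts": ["continuous-integration/vsts"]},
        "enforce_admins": true,
        "required_pull_request_reviews": {"required_approving_review_count": 1},
        "restrictions": null
      }'
```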
What all of this really means is that in two evenings I have managed to
- Migrate the schema out of a SQLite database into an SSDT solution
- Resolve the errors so that it compiled
- Create a GitHub repo and push the source code
- Create a backlog
- Create a CI Build
- Create alerts when build completes
- Create branches for dev and releases
Yes, the build is rudimentary in that it doesn’t deploy or run tests (which haven’t even been written yet!). And yes, the feedback is only on the build completing, though even that allows for quality control in pull requests. And yes, there is much more refactoring to do. And yes, the backlog is light on detail and heavy on assumptions right now. But what I have here are the fundamentals of the pillars I mentioned earlier.
Rather than just pushing on with re-writing the database and leaving it to sit on my hard drive, I’ve put foundations under every pillar. Going from zero builds/work items/alerts to one of each is an infinite improvement.
Once you have even a tiny stub of a pipeline, you have something to build on, and the next pipeline will be a little easier to write. You might even find yourself in the middle of adding a feature and stop to consider how it fits into your pipeline and what changes you need to make there.