Write Automated Code

It doesn’t matter what tool you use for testing your software, the question put to you one day will always be the same.

“Can we automate it?”

Can we take you out of the mix and run it on its own?

Can we run it across different tenants concurrently without it “crossing the streams”?

Can we send it 1,000 simultaneous requests to see how it does?

Think about the code you’ve written over the last few months – would any of it satisfy these three tests for code that can be automated?
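
To make that last test concrete, here is a minimal sketch (the endpoint URL is a placeholder for whatever service you are exercising) of what firing 1,000 simultaneous requests might look like:

 using System;
 using System.Linq;
 using System.Net.Http;
 using System.Threading.Tasks;

 class LoadTest
 {
     static async Task Main()
     {
         // Hypothetical endpoint - swap in the service you want to exercise.
         const string endpoint = "https://localhost/api/health";

         using (var client = new HttpClient())
         {
             // Kick off 1,000 requests concurrently and wait for them all.
             var requests = Enumerable.Range(0, 1000)
                 .Select(_ => client.GetAsync(endpoint))
                 .ToList();
             var responses = await Task.WhenAll(requests);

             int failures = responses.Count(r => !r.IsSuccessStatusCode);
             Console.WriteLine("Failed: {0} of {1}", failures, responses.Length);
         }
     }
 }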

It’s not easy, and it generally involves an extra amount of testing and development to understand these scenarios and apply them to your current project set.

But that’s where you shine, right?

That’s where you take the tasks that people grind on, you fix them, you automate them, you save time and money, and you get back to doing the work that matters, right?

Despite all that hubris, it is not always that easy, and oftentimes it is much harder to convince your Project Lead why you need to undertake these tasks for something that might not be requested for another 6 or 12 months.

And this is true.

And the counter to this argument is that if you build it today, you can start automating those tests today, you can start updating multiple environments and topologies today, and you can start testing against higher performance thresholds than the ones you are seeing today.

And that’s where the value and the need for this come from. Because in the moment when you really, really need this type of architecture and design to be in place in your code, you are not going to have the time to wait 2–3 weeks for it to be ready.

You don’t have to tackle it all in one fell swoop.

Start small: pick a component you are currently working on that could benefit most from these capabilities and build in that functionality. Then, over time, keep building in a little more here and a little more there.

And when the day comes that you get asked those three questions, your answer will be: “Yes, simply turn it on here and you are good to go”.

And that is the answer that every worried Customer Support Technician, stressed-out QA Tester and hopeful Sales Engineer wants to hear to know that your code is ready for the big leagues.

Now let’s go and do it.

Raising The Bug Bar

In our quest to find the latest, greatest and bestest methodologies out there to ship great software, we often overlook the simplest of implementations to get a project going – The Bug Bar.

As much as I wish this was an actual bar a la Bugs, it’s not.


The Bug Bar is a simple tool used to keep your team’s head above water when shipping copious amounts of software against an unpredictable schedule.

How it Works

Before each iteration, set a maximum number of reported bugs that the team can carry: that is, bugs that cannot be triaged into a subsequent iteration based on their priority and severity to the project.

There is no distinction between bugs raised by Developers, QA, End Users or your mother – they are all created and treated as equal.

When that number is hit during the iteration, all feature and task development work is halted until the bug count drops back down to an acceptable level, at which point the team returns to feature and task development.
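
To make the mechanics concrete, here is a minimal sketch of the gate itself; the open-bug count would be fed in from whatever bug tracker your team uses:

 using System;

 class BugBarGate
 {
     // The maximum open-bug count agreed on before the iteration started.
     private readonly int _bugBar;

     public BugBarGate(int bugBar)
     {
         _bugBar = bugBar;
     }

     // openBugCount is a stand-in for a query against your bug tracker:
     // every bug that cannot be triaged into a later iteration counts,
     // no matter who reported it.
     public bool FeatureWorkAllowed(int openBugCount)
     {
         return openBugCount < _bugBar;
     }
 }

Once FeatureWorkAllowed returns false, feature and task work stops until fixes bring the count back under the bar.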

What it Does

Ensures the team is focused on not rushing task and feature development, since rushing introduces bugs into the software that were previously not there but aren’t being worked on in the current iteration.

Ensures that the entire team (from business to project to development) is on the same page with this level of importance and knows how to react accordingly when this happens.

Ensures your Project Manager is monitoring the bug lists and actively triaging what does and doesn’t apply (taking this load off of Developers).

Sets the expectation that the content is greater than the date.

It’s not a complicated concept, it’s downright simple, but sometimes that is where you need to start to see a change in the delivery of your software.

Great Software NEEDS Requirements

I think it would be pretty cool to have had some IoT on my keyboard telling me how many lines of code I have written over the years.

But I would love to cross-reference that statistic with how much code I have rewritten based on poor requirements.

How much code I deleted?

How much code I had to update?

How much time was lost?

I generally try to keep to code on this blog but at the core of every great release are the requirements that are built at the beginning, middle and end.  Do a bad job on those and I can guarantee what the end result will be.

When jumping onto a new project or product, the first thing I always do is sit down with the user and the person writing the requirements so I can see how close they are to understanding each other’s visions for what the end product will be.

If they are closely aligned, I know we’ll be in good shape; if they are far apart, I know I have to plan for some additional work on my side to bring them closer together.

As much as developers would like to ignore this part of the process, they can’t.  Well-written requirements are the first step in a successful software delivery.

The developer’s role might not be to write the requirements, but their role is definitely to ensure that everyone has a complete and thorough understanding of the problem everyone is trying to solve as well as ensuring that the requirements stay true to that focus.

I published a presentation on SlideShare a few years ago on How to Write Great Requirements; the content is as relevant today as it was then.


A Scaled-Out CRM Solution Architecture

Recently I started work on a pretty big CRM project and wanted to apply a more scaled-out approach to my solution architecture.  CRM offers a great facility to deploy code in its solutions, but when starting a new project you should always ask yourself the following questions before you start adding entities into your solutions.

  1. What is the frequency of updates that will be requested by the users and to what components?
  2. Are there multiple users contributing to this project?
  3. How big do you expect this project to grow?
  4. What kind of promotion model is in place for deployments?

I have found that questions such as these generally drive the overall solution architecture I will put in place on a project.  For instance, if we are working with a client that has a Development-to-Production promotion model with only a developer or two on the project, I’ll suggest deploying as one solution.  However, if the project is quite large and has multiple developers on it, coupled with multiple deployment stages (which invariably means a more formalized testing process), I’ll tend to go with a more scaled-out architecture.

For this current project I went with a component-based solution architecture, broken out as follows:

  1. Entities – this contains all my core entities, client extensions and web resources.
  2. Security – this contains all my security roles.
  3. Reports – this contains all my custom reporting files.
  4. Workflows – this contains all custom workflows and plugin functionality.

The reason for this approach is to reduce the load on QA and allow the team to install what they need without fear of interfering with the work of others.


Some scenario examples where this should help the team:

  1. When a developer makes a change to only one of the solutions, QA can rest easy deploying that single solution (instead of all four) and regression testing only that one solution rather than all of them.
  2. Reports will most likely be handled by developers that are very familiar with developing custom RDL files, as such, they don’t need references to any of the underlying entities and based on user feedback, they will be able to deploy on their own schedule, outside of the core application.
  3. Having security in its own solution now opens it up to being managed by non-developer groups (huge).  Although I don’t recommend deploying an unmanaged solution to Production, this solution can actually start its “development” or “work” in a TEST or UAT environment where Business Analysts can create the security roles they think make sense against the new components built by the developer teams, without worrying about interfering with any previous development to date.

These are only a few scenarios where I see this architecture helping out in the long run.  I have gone through a couple of iterations of scaled-out architectures, from feature-based to product-based, but so far this one represents a consistent approach that can be replicated across projects.

My next step will be to minimize any complexity associated with deploying updates to different environments by writing a little component to deploy the correct solutions in the correct order.  However, as it stands right now, the solutions themselves are quite loosely coupled, and a single solution (e.g., Reports) can be deployed on its own without triggering a solution import error.
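
I haven’t written that component yet, but the heart of it would be something like the sketch below, assuming one exported file per solution (the file names are placeholders for this project’s layout) and using the SDK’s standard ImportSolutionRequest message:

 using System.IO;
 using Microsoft.Crm.Sdk.Messages;
 using Microsoft.Xrm.Sdk;

 public static class SolutionDeployer
 {
     // Deploy in dependency order: Entities first, since the other
     // solutions reference its components. File names are placeholders.
     private static readonly string[] DeployOrder =
     {
         "Entities.zip", "Security.zip", "Reports.zip", "Workflows.zip"
     };

     public static void DeployAll(IOrganizationService service, string folder)
     {
         foreach (string file in DeployOrder)
         {
             var import = new ImportSolutionRequest
             {
                 CustomizationFile = File.ReadAllBytes(Path.Combine(folder, file)),
                 OverwriteUnmanagedCustomizations = false,
                 PublishWorkflows = true
             };
             service.Execute(import);
         }
     }
 }

Pointing this at a different environment is then just a matter of handing it a different IOrganizationService connection.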

Using Dynamics365 as a Queue for Data Synchronization

Over the years, I’ve migrated a lot of data from on-premise systems into Dynamics365 (whether they be existing CRM systems, homegrown solutions or off-the-shelf packages).  I’ve used a number of third-party tools to accomplish these tasks (Scribe and Kingsway) but have also written my own when the need arose.

On a recent project, faced with yet more synchronization requests and the need for more infrastructure to manage changes, mediate conflicts, prevent ping-ponging data writes, etc., I started to change my thinking: instead of assuming I could have everything on-premise (i.e., the ability to queue up new Virtual Images, tons of server space, et al), I asked how I would solve this problem if all I had was Dynamics365 and the server I am moving data from.

To start with: how could I keep up to date with all the changes happening in Dynamics and queue them up for later retrieval by some other system?

My first thought was an async workflow to do the job, but this raised a few other requirements:

  1. Administrators should be able to associate this workflow to any new entity they want synced with super ease (i.e., create workflow, finish).
  2. The code for the workflow should not need to be modified at all and should dynamically figure out the entity and primary key attribute that I need to later retrieve to be synced.
  3. Code should be small.

So here is what I wrote as a workflow and then deployed to my online tenant.  The solution is really tiny; I created an entity called syn_dataeventqueue, which contains all the synchronization entries.

I did some tests between custom and core entities and was able to detect the proper change events coming in for the correct entities.  You can see the initial state is “Not Processed” – I created some custom states so that when I pull the requests, they don’t get pulled again if the syncing period runs longer than expected, but that’s for another post.  Here is the code.

 using System;
 using System.Activities;
 using System.Linq;
 using Microsoft.Xrm.Sdk;
 using Microsoft.Xrm.Sdk.Messages;
 using Microsoft.Xrm.Sdk.Metadata;
 using Microsoft.Xrm.Sdk.Query;
 using Microsoft.Xrm.Sdk.Workflow;

 // Custom workflow activity (class wrapper and usings added for completeness)
 public class QueueDataEventActivity : CodeActivity
 {
     protected override void Execute(CodeActivityContext Execution)
     {
         // Get the tracing service
         ITracingService tracingService = Execution.GetExtension<ITracingService>();

         // Get the workflow context and create the organization service
         IWorkflowContext context = Execution.GetExtension<IWorkflowContext>();
         IOrganizationServiceFactory serviceFactory = Execution.GetExtension<IOrganizationServiceFactory>();
         IOrganizationService service = serviceFactory.CreateOrganizationService(context.InitiatingUserId);

         // Now we need to query the entity's metadata for its primary id attribute
         RetrieveEntityRequest request = new RetrieveEntityRequest();
         request.EntityFilters = EntityFilters.Attributes;
         request.LogicalName = context.PrimaryEntityName;

         RetrieveEntityResponse response = (RetrieveEntityResponse)service.Execute(request);

         // ColumnNumber == 1 filters out the primary ids of related entities (see note below)
         AttributeMetadata PrimaryAttribute = response.EntityMetadata.Attributes
             .Where(a => a.IsPrimaryId == true && a.AttributeType == AttributeTypeCode.Uniqueidentifier && a.ColumnNumber == 1)
             .FirstOrDefault();
         string AttributeName = PrimaryAttribute.SchemaName.ToLower();

         Entity EntityToSync = service.Retrieve(context.PrimaryEntityName, context.PrimaryEntityId, new ColumnSet(AttributeName));

         tracingService.Trace("Trace| Record to Synchronize: Entity [{0}], Id [{1}].", context.PrimaryEntityName, context.PrimaryEntityId.ToString());

         if (EntityToSync.Contains(AttributeName))
         {
             try
             {
                 // Queue the change event for later retrieval by the external system
                 Entity SyncroEntity = new Entity("syn_dataeventqueue");
                 SyncroEntity["syn_name"] = context.PrimaryEntityName;
                 SyncroEntity["syn_entityrecordid"] = context.PrimaryEntityId.ToString();
                 service.Create(SyncroEntity);
             }
             catch (Exception ex)
             {
                 tracingService.Trace("Error| Synchronization Submission Exception: {0}", ex.ToString());
             }
         }
     }
 }
The only piece you are probably wondering about is the search for ColumnNumber equal to 1.  In my tests, this is always the primary id field; when I tried searching on IsPrimaryId alone, it brought back results for the primary ids of related entities, so that didn’t work.

Here is how things look in the workflow creation itself.

[Screenshot: the workflow definition, a single step]

One step!

And what does it look like in Dynamics?

[Screenshot: syn_dataeventqueue records in a Dynamics grid, showing the initial “Not Processed” state]

Beautiful – now to finish the rest of it.
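
For the retrieval half, the consuming system only needs to pull the unprocessed entries out of syn_dataeventqueue. Here is a minimal sketch, assuming my “Not Processed” state maps to a statuscode value of 1 (your custom state values may differ):

 using Microsoft.Xrm.Sdk;
 using Microsoft.Xrm.Sdk.Query;

 public static class QueueReader
 {
     // Pull queued change events that have not been processed yet.
     // Assumes "Not Processed" is statuscode 1; adjust to your states.
     public static EntityCollection GetUnprocessedEvents(IOrganizationService service)
     {
         var query = new QueryExpression("syn_dataeventqueue")
         {
             ColumnSet = new ColumnSet("syn_name", "syn_entityrecordid")
         };
         query.Criteria.AddCondition("statuscode", ConditionOperator.Equal, 1);

         return service.RetrieveMultiple(query);
     }
 }

Each returned record tells the external system which entity and record id to fetch from Dynamics for syncing.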


Setting up Dynamics365 for the First Time

I recently went down the path of purchasing a Microsoft Action Pack, so I immediately went about setting it up in my pre-existing Office365 Tenant and ran into a few gotchas that might save you some time.

Office365 Business Essentials and Enterprise Plan 1 Licensing

I currently have an Office365 Business Essentials plan, but when creating my Dynamics tenant I quickly realized that users licensed under Business Essentials could not be readily imported into Dynamics365 due to SharePoint plan conflicts.

There is a path to upgrade users through the Office365 Admin, but that’s for another blog.

Setting up Dynamics365

When creating your tenant for Dynamics, it’s not solely about your licenses but also about creating the actual tenant.  To do this, you first need to navigate down to the Settings section of your Office365 Admin and select “Services & Add-Ins”.  From there you will be presented with the window below, where you create your new tenant simply by clicking “Manage your Dynamics 365 settings”.

[Screenshot: the Services & Add-Ins page in the Office365 Admin]

From there you will be prompted to go through the same steps that you generally go through when deploying a tenant (perhaps with a different User Interface).

[Screenshot: configuring the new tenant as a Sandbox]

In the above screenshot, I have configured my first tenant to be a Sandbox where I can do my development and mess around with things.  Sandboxes were always available before but are now included with every plan, no matter which plan you have, granting you one free sandbox per client.

[Screenshot: the tenant now listed as Sandbox rather than Production]

As you can see in the above screenshot, my tenant is now listed as a Sandbox and not Production.

Now we can get to doing some coding.