Visual Studio Compatibility

If you’re like me and spend most of your time in Visual Studio, being kicked out to a browser to open a Work Item is unwelcome: a popup window and a wait for something to load that doesn’t need to.

Translation: When I’m in Visual Studio, working in Visual Studio, I don’t need my bugs to open in a browser.

Thankfully this is easy to change.

[Screenshot: the Visual Studio Options setting that controls how Work Items open]

Write Automated Code

It doesn’t matter what tool you use for testing your software; one day, the question to you will always be the same.

“Can we automate it?”

Can we take you out of the mix and run it on its own?

Can we run it across different tenants concurrently without it “crossing the streams”?

Can we send it 1,000 simultaneous requests to see how it does?

Think about the code you’ve written over the last few months – would any of it satisfy these three tests for code that can be automated?

It’s not easy, and it generally involves extra testing and development work to understand these scenarios and apply them to your current project set.

But that’s where you shine, right?

That’s where you take the tasks that people grind on, you fix them, you automate them, you save time and money, and you get back to doing the work that matters, right?

Despite all that hubris, it is not always that easy, and oftentimes it is much harder to convince your Project Lead why you need to undertake these tasks for something that might not be requested for another 6 or 12 months.

And this is true.

And the counter to this argument is that if you build it today, you can start automating those tests today, you can start updating multiple environments and topologies today, and you can start testing against higher performance thresholds than you are seeing today.

And that’s where the value and the need for this comes from. Because when the moment comes that you really, really need this type of architecture and design to be in place in your code, you are not going to have the time to wait 2–3 weeks for it to be ready.

You don’t have to tackle it all in one fell swoop.

Start small: pick a component you are currently working on that could benefit most from these capabilities and build in that functionality. Then over time, keep building in a little more here and a little more there.

And when the day comes that you get asked those three questions, your answer will be: “Yes, simply turn it on here and you are good to go”.

And that is the answer that every worried Customer Support Technician, stressed-out QA Tester, and hopeful Sales Engineer wants to hear to know that your code is ready for the big leagues.

Now let’s go and do it.

Creating a Custom Interactive button in Dynamics

It’s been a while since I played with the Ribbon Workbench, and I had to re-familiarize myself with it to deploy some custom button functionality to a Dynamics tenant.

If you’re not familiar with the Ribbon Workbench, go download it and bask in its glory and time-saving capabilities.

Once you install the solution into your Dynamics system, usage is as simple as selecting the solution you want your new button deployed to, dragging a button onto the Form toolbar, and creating a command object that calls a function in your specified JavaScript file.

Creating the Button

As you can see from the screenshot, my function is called SendMail and is called from a library within the provided file.  When I first started coding this button, I added a simple alert() to the initial function call so I could quickly validate the button’s functionality, deploy it and move on to the rest of the implementation.

[Screenshot: the SendMail command configured in Ribbon Workbench]

There are a host of other options when creating a button, related to display rules and hide actions, which can make your implementation that much more dynamic.

What I really like about the Ribbon Workbench is that the customizations are deployed directly to your solution without having to deploy the workbench solution between environments.

No external Dependencies = awesome development!

User Interaction

In my scenario, the button that I created was calling an action where the results were passed back to my calling function – for better or for worse.

Adding to my integration, I sent a notification back to the client when the action had completed.  If there was an error, the error was sent to the client.

For an informative notification, this looked like:

Xrm.Page.ui.setFormNotification("Authorization successfully sent.", "INFO");

And in the case of an error:

function (e) {
    // Error
    console.log("Workflow Err: " + e);
    Xrm.Page.ui.setFormNotification("Could not Authorize, Error: " + e, "ERROR");
}

When both pieces are thrown together, you have a great integration story for facilitating calls to a custom service, action or workflow and updating users on the status of those calls when they complete.

Improve Query Performance to Oracle

I recently had an issue where we were migrating a large Oracle database into Dynamics, which required a significant number of lookups back to Oracle for synchronization keys between the two systems.

When we moved the system between different database environments, we started to see the following error:

“ORA-12801: error signaled in parallel query server P001
ORA-12853: insufficient memory for PX buffers: current 1632K, max needed 80640K
ORA-04031: unable to allocate 65560 bytes of shared memory ("large pool", "unknown object", "large pool", "PX msg pool")”

As a developer, I get very worried when code changes are required between environments while all other variables (the code and the database itself) stay the same.  In this case, however, we had simply been lucky never to run into this problem in DEV.

Because I was dynamically constructing the query on the fly, Oracle saw a brand-new query every time (despite the only thing changing being the value in the WHERE clause).  On their own these queries were fine, but running 50,000+ of them led to some issues.
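For illustration, the offending pattern looked something like this (a reconstructed sketch, not the actual production code – the string concatenation is the part that matters):

// Anti-pattern: every userId value produces a textually different SQL
// statement, so Oracle hard-parses each one instead of reusing a cached plan.
OracleCommand oraCommand = new OracleCommand(
    "SELECT user FROM test.USER_LOOKUP WHERE user = '" + userId + "'", db);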

To get around the above error we leveraged the OracleParameter syntax as follows.

OracleCommand oraCommand = new OracleCommand("SELECT user FROM test.USER_LOOKUP WHERE user = :userName", db);
oraCommand.Parameters.Add(new OracleParameter("userName", userId));
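Because the statement text no longer changes between calls, the command and its bind variable can also be reused across the whole run. A minimal sketch, assuming ODP.NET's OracleDbType and a list of IDs called userIds (a stand-in name):

OracleCommand oraCommand = new OracleCommand(
    "SELECT user FROM test.USER_LOOKUP WHERE user = :userName", db);
OracleParameter userParam = new OracleParameter("userName", OracleDbType.Varchar2);
oraCommand.Parameters.Add(userParam);

foreach (string userId in userIds)
{
    // Only the bind value changes; Oracle reuses the cached execution plan.
    userParam.Value = userId;
    using (OracleDataReader reader = oraCommand.ExecuteReader())
    {
        // ... read back the synchronization key ...
    }
}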

Once implemented, we noticed a huge shift in performance and no more parallel query errors.  We had a lot of classes to change but were able to implement the change in a little under a day.

Migrating Users in Skype For Business

I recently had to migrate a ton of users from one Skype For Business Pool to a new one.

Step 1: Get the Users

To get started, we wanted to see what we were working with, so I wrote the following PowerShell script that outputs a CSV file with all of the users we wanted to migrate.

$dayStart = get-date
$dayEnd = $dayStart.AddDays(-300)
get-aduser -Filter 'Enabled -eq $true -and lastlogondate -gt $dayEnd -and UserPrincipalName -notlike "system-*" -and mail -ne "$null"' -properties * |
    Select-Object DisplayName, SAMAccountName, UserPrincipalName, Department, SipAddress, Enabled, LastLogonDate, msRTCSIP-UserEnabled, msRTCSIP-PrimaryUserAddress |
    Export-CSV c:\Existing_Lync_Users.csv

You can add as many parameters as you want to the filter, and there is some interesting information here on how to structure your WHERE clauses.  For me, I wanted to make sure the user was enabled, had logged in at least once in the past 300 days, and the account was not an admin-type account (note the above ‘system-‘ filter is made up and not the real one).

From there, this exported all the information into a handy, dandy CSV file that I could filter as I saw fit.

Step 2: Move the Users

If I were using the outputted CSV file at face value, I could run the following to migrate all our users, but you can take your original CSV and filter by department or some other value to minimize the resultset.

Import-Csv c:\Existing_Lync_Users.csv | ForEach-Object {Move-CsUser -Identity $_.UserPrincipalName -Target "new-pool-name" -Confirm:$False}

For each row that was found, this executes the Move-CsUser command against the new target pool, with confirmation disabled so it would not prompt me for each user.


UCMA Toast Messages and Conversation Subjects

When sending an Instant Message through UCMA you have a few options available to you that simplify the setting of a conversation’s subject.  This can be useful for a user who might have a myriad of ongoing conversations and needs to know the content of what is incoming and what is ongoing.

Enter the Toast Message and the Conversation Subject.

The Toast Message

The Toast Message is the alert you receive in the bottom right corner of your screen that notifies you of an incoming IM.  This message is fully customizable but is only set when the call is established.  In this example, I have a variable called _conversationSubject which I pass into my BeginEstablish method to set the value of the ToastMessage for the conversation.


_instantMessagingCall.BeginEstablish(DestinationUri, new ToastMessage(_conversationSubject), null, CallEstablishCompleted, _instantMessagingCall);
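For completeness, the matching CallEstablishCompleted callback just needs to complete the asynchronous operation. A minimal sketch (the exception type assumes UCMA's Microsoft.Rtc.Signaling namespace):

private void CallEstablishCompleted(IAsyncResult result)
{
    // The call was passed in as the state object of BeginEstablish
    InstantMessagingCall call = (InstantMessagingCall)result.AsyncState;

    try
    {
        call.EndEstablish(result);
    }
    catch (RealTimeException ex)
    {
        Console.WriteLine("Call establish failed: " + ex);
    }
}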

After the IM has been opened, the ToastMessage subject is no longer applied to the conversation and the conversation defaults to the SFB default subject (name of the person you are chatting with).

Conversation Settings

The Conversation settings are different in nature, as they are set BEFORE the InstantMessagingCall is created (i.e., before a Toast Message is established).  The implementation is again very simple: I instantiate the ConversationSettings object, set my subject, and then proceed to initialize the call.

ConversationSettings convSettings = new ConversationSettings();
convSettings.Id = Guid.NewGuid().ToString();
convSettings.Subject = _conversationSubject;
Conversation conversation = new Conversation(transferee, convSettings);
_instantMessagingCall = new InstantMessagingCall(conversation);

Leveraging both the ConversationSettings and ToastMessage classes gives you the power and flexibility to deliver a consistent user experience and/or a more targeted one based on the severity of the call that is coming in.

Although this might seem unnecessary for a User to User Instant Message Conversation, where it really starts to shine is when an agent is handling multiple IM conversations that interact with the same Application Endpoint.
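Putting the two together looks something like this: a condensed sketch of the snippets above, assuming an already-established endpoint in _endpoint (a stand-in name) and the CallEstablishCompleted callback from earlier:

// Subject shown in the conversation window, set before the call exists
ConversationSettings convSettings = new ConversationSettings();
convSettings.Id = Guid.NewGuid().ToString();
convSettings.Subject = _conversationSubject;

Conversation conversation = new Conversation(_endpoint, convSettings);
_instantMessagingCall = new InstantMessagingCall(conversation);

// Subject shown in the incoming toast alert, set as the call is established
_instantMessagingCall.BeginEstablish(DestinationUri,
    new ToastMessage(_conversationSubject), null,
    CallEstablishCompleted, _instantMessagingCall);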

Raising The Bug Bar

In our quest to find the latest, greatest and bestest methodologies out there to ship great software we often overlook the simplest of implementations to get a project going – The Bug Bar.

As much as I wish this was an actual bar a la Bugs, it’s not.


The Bug Bar is a simple tool used to keep your team’s head above water when shipping copious amounts of software against an unpredictable schedule.

How it Works

Before each iteration, set a maximum for the number of reported bugs that, based on their priority and severity to the project, cannot be triaged into a subsequent iteration.

No distinction is made between bugs raised by Developers, QA, End Users or your mother – they are all created and treated as equal.

When that number is hit during the iteration, all feature and task development work is halted until the bug count drops back down to an acceptable level, at which point the team returns to feature and task development.

What it Does

Ensures the team is focused on not rushing task and feature development in a way that introduces new bugs into the software – bugs that were previously not there and that aren’t being worked on in the current iteration.

Ensures that the entire team (from business to project to development) is on the same page about this level of importance and knows how to react accordingly when this happens.

Ensures your Project Manager is monitoring the bug lists and actively triaging what does and doesn’t apply (taking this load off of Developers).

Sets the expectation that the content is more important than the date.

It’s not a complicated concept; it’s downright simple. But sometimes that is where you need to start to see a change in the delivery of your software.