republish old tech posts with added tag

This commit is contained in:
Wouter Groeneveld 2018-02-26 11:54:49 +01:00
parent 2dc0bd73bc
commit 76410531f6
13 changed files with 1218 additions and 0 deletions

View File

@ -0,0 +1,177 @@
---
title: Enhancing the builder pattern with closures
date: '2013-11-14'
bigimg: /img/Enhancing the builder pattern with closures.jpg
subtitle: the trainwreck/builder/chaining pattern can be dangerous and here's why
tags:
- tech
- closures
- groovy
- 'C#'
- javascript
- java
- functional programming
published: true
---
This post is inspired by Venkat Subramaniam's [Devoxx 2013 talk Thinking Functional Style](http://www.devoxx.be/dv13-venkat-subramaniam.html). See downloads at [agiledeveloper.com](http://www.agiledeveloper.com/downloads.html) which has a rather cool Groovy example.
### Classic builders
For years, I've been using the builder pattern to quickly create new objects to be inserted into the database or to inject our domain objects with the required data. We started with so-called "Object Mothers", static methods which simply create and fill up an object, passing in a huge number of parameters. That quickly became very cumbersome to work with. Most of the time, the code will look like this, whether it's C# or Java doesn't really matter:
```csharp
public class UserBuilder
{
    private UserType_V1_0 type = UserType_V1_0.Administrator;
    private string code = "code";

    public User_V1_0 Build()
    {
        User_V1_0 user = new User_V1_0(code, "name", type, "id", "campusId", true);
        return user;
    }

    public UserBuilder WithCode(string code)
    {
        this.code = code;
        return this;
    }

    public UserBuilder WithType(UserType_V1_0 type)
    {
        this.type = type;
        return this;
    }
}
```
Used this way:
```csharp
var user = new UserBuilder()
    .WithCode("AB")
    .Build();
```
Okay, what's happening here?
- Builder objects have `WithX()` methods, returning `this` to be able to chain, filling up every required variable.
- Default values are provided, so we're not obliged to call every method if we're only interested in one field.
- At the end of the chain, we call `Build()`, which returns our object.
### Enhanced builders
I've never given it much thought, but yes, there are some problems with this implementation (as with everything). The most important one: can you reuse your instantiated builder? No? Yes? We never assign it, but we **could** if we really wanted to. And since we're **mutating the builder**, you're definitely getting into trouble then.
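A minimal sketch of how that trouble shows up, reusing the `UserBuilder` from above:

```csharp
var builder = new UserBuilder().WithCode("AB");
var admin = builder.Build();

// somewhere else, the same instance gets innocently reused...
var nurse = builder.WithType(UserType_V1_0.NursingStaff).Build();
// ...and this second user silently keeps code "AB" from the first chain
```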
Using a lambda to pass in the work on our builder might solve this:
```csharp
public class UserBuilder
{
    private UserType_V1_0 type = UserType_V1_0.Administrator;
    private string code = "code";

    private UserBuilder()
    {
    }

    private User_V1_0 Build()
    {
        return new User_V1_0(code, "name", type, "id", "campusId", true);
    }

    public static User_V1_0 Build(Func<UserBuilder, UserBuilder> block)
    {
        var builder = new UserBuilder();
        block(builder);
        return builder.Build();
    }

    public UserBuilder WithCode(string code)
    {
        this.code = code;
        return this;
    }

    public UserBuilder WithType(UserType_V1_0 type)
    {
        this.type = type;
        return this;
    }
}
```
Used this way:
```csharp
var user = UserBuilder.Build(_ =>
    _.WithCode("AB")
     .WithType(UserType_V1_0.NursingStaff));
```
Notice that the character `_` is a convention when the lambda has only one parameter; it could also be called `builder`. We still need a parameter, though, as `block(builder)` passes in the temporarily created builder. What did we solve?
- The actual builder instance is bound within the `Build()` scope. You'll never be able to assign it when using the static method.
- One might say, we reduced some redundancy in the implementation by eliminating the need to call the final `Build()` method, but it's simply being moved.
### Supercharged builders
In Groovy (the devoxx example), we can cleverly use the `.delegate` mechanism to eliminate the need to chain at all. Groovy also reduces the syntax noise a bit (brackets, semicolons). We could create a `Build` method like this:
```groovy
public static User_V1_0 Build(block) {
    new UserBuilder().with(block)
    // does the same as cloning the block, assigning it to .delegate and executing it
}
```
Used this way:
```groovy
UserBuilder.Build {
    Code "AB" // Same as Code("AB");
    Type UserType_V1_0.NursingStaff
}
```
How does this work?
- The `Code()` method does not exist in our block closure, but we assign a delegate to it: our temp lexically scoped `UserBuilder` instance - that's where the method lives. When the code is executed, Groovy first looks for a method within the block, and then tries to fetch it via the delegate.
For more information on Groovy delegates, see the [Groovy documentation: Delegation Pattern](http://groovy.codehaus.org/Delegation+Pattern). This works thanks to the late binding of the language and won't work in statically typed languages such as C#. You might be able to come close using `LINQ` expression trees, but that requires a lot of effort just to write a simple DSL.
### Leveraging this principle to DSLs
In Javascript, you can also manage to do something like that using `.prototype` and [prototypal inheritance](http://brainbaking.com/wiki/code/javascript/inheritance) and `apply()` to dynamically bind the `this` context (see [Function.prototype.apply MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/apply)).
Of course, builders are completely redundant in JS. Just create a `JSON` object using `{ key: value }`. Done. But this principle might be interesting for things like creating a "mailer" - as in the devoxx 2013 example:
```javascript
var mailerPrototype = {
    from: function() { console.log("from"); },
    to: function() { console.log("to"); },
    sub: function() { console.log("sub"); },
    body: function() { console.log("body"); },
    send: function() { console.log("sending..."); }
};

var mailer = function() {};
mailer.mail = function(block) {
    // .prototype magic happens inside Object.create()
    block.apply(Object.create(mailerPrototype));
};

// this still sucks, I don't want to use 'this.', can use chaining...
mailer.mail(function() {
    this.from("me@gmail.com");
    this.to("you@gmail.com");
    this.sub("this is my subject");
    this.body("hello");
    this.send();
});
```
You'll still need `this.`, sadly. This is not needed in Groovy:
```groovy
mailer.mail {
    from "me@gmail.com"
    to "you@gmail.com"
    sub "this is my subject"
    body "hello"
    send()
}
```
Now **that** looks readable. To be able to create something like that, a language has to:
- have functions as first-class citizens.
- have a clean syntax, to be able to reduce a lot of noise (CoffeeScript can get this done for JS for instance)
- have late binding or duck typing
That said, going back to Java 7 is going to be a major pain in the ass. No, I do not want to create useless interfaces! (Tip: use `Function` and `Predicate` from [Google Guava](https://code.google.com/p/guava-libraries/).)

View File

@ -0,0 +1,92 @@
---
title: Custom Webdriver Page Factories
bigimg: /img/Custom Webdriver Page Factories.jpg
date: '2014-09-22'
subtitle: Wrapping WebElements to reduce boilerplate clutter
tags: ['tech', 'unit testing', 'java', 'C#', 'webdriver', 'scenario testing' ]
---
The problem: WebDriver elements returned by `driver.FindElement()` are too generic. There are the `Text`, `SendKeys()` and `Click()` methods/properties (depending on your C#/Java implementation). The solution is to simply wrap all elements inside custom HTML objects which contain specific methods like `ShouldContainValue` or `Type` (okay, that's a one-to-one mapping with `SendKeys()`, but it's a lot less technical!). Instead of
```csharp
[FindsBy(How = How.CssSelector, Using = ".ux-desktop-taskbar-startbutton")]
private IWebElement startButton;

[FindsBy(How = How.CssSelector, Using = ".other")]
private IWebElement whatever;
```
You'd find code like
```csharp
[FindsBy(How = How.CssSelector, Using = ".ux-desktop-taskbar-startbutton")]
private HTMLSubmitButton startButton;

[FindsBy(How = How.CssSelector, Using = ".other")]
private HTMLInputBox whatever;
```
In Java, this is not that difficult. Normally, all fields annotated with `@FindBy` are filled in via reflection by `PageFactory.initElements()`. (Warning: this creates proxies and does not yet actually do the lookup in the DOM tree. That's a good thing, as filling the fields usually happens inside the constructor of a page object.) `initElements` returns the filled page, so you can do a few things from there:
- postprocess the page and decorate your fields
- create your own page factory and create your own fields, wrapped around the webdriver proxies
In C#, you're in trouble - the class is sealed, and the proxy classes are internal. Creating your own factory is possible, but produces fuzzy code:
```csharp
internal class PageFactory
{
    private PageFactory()
    {
    }

    private static By FindsByAttributeToBy(FindsByAttribute attribute)
    {
        return (By) typeof (FindsByAttribute)
            .GetProperty("Finder", BindingFlags.NonPublic | BindingFlags.Instance)
            .GetValue(attribute);
    }

    public static void InitElements(IWebDriver driver, object page)
    {
        foreach (FieldInfo field in FindAllFieldsAndProperties(page.GetType()))
        {
            Attribute[] findsByAttribs = Attribute.GetCustomAttributes(field, typeof (FindsByAttribute), true);
            if (findsByAttribs.Length > 0)
            {
                var findsByAttribute = (findsByAttribs[0] as FindsByAttribute);
                if (field.FieldType == typeof (IWebElement))
                {
                    field.SetValue(page, FindElement(driver, FindsByAttributeToBy(findsByAttribute)));
                }
                else if (typeof (IEnumerable).IsAssignableFrom(field.FieldType))
                {
                    field.SetValue(page, FindElements(driver, FindsByAttributeToBy(findsByAttribute)));
                }
            }
        }
    }

    private static IWebElement FindElement(IWebDriver driver, By by)
    {
        // warning: create WebProxyElement instead of directly doing a lookup
        return driver.FindElement(by);
    }

    private static IReadOnlyCollection<IWebElement> FindElements(IWebDriver driver, By by)
    {
        // warning: create WebListProxyElement instead of directly doing a lookup
        return driver.FindElements(by);
    }

    private static IEnumerable<FieldInfo> FindAllFieldsAndProperties(Type type)
    {
        var list = new List<FieldInfo>();
        list.AddRange(type.GetFields(BindingFlags.Instance | BindingFlags.Public));
        for (; type != null; type = type.BaseType)
        {
            list.AddRange(type.GetFields(BindingFlags.Instance | BindingFlags.NonPublic));
        }
        return list;
    }
}
```
If you have a keen eye, you'll notice a few things:
- caching of the attribute wouldn't work anymore. The default C# WebDriver implementation is fuzzy and I didn't want to copypaste code I won't use.
- proxying won't work anymore, you'd have to use reflection to instantiate internal classes.
- reflection has been used to fetch the `By` instance of the `FindsByAttribute`. Yay.
The above solution is too complex for solving such a simple thing. So instead of a custom page factory, in C# we now use extension methods on `IWebElement`. Another possibility would be to create wrapper objects on-the-fly, but you'd still have to map the "raw" web elements on page objects.
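A sketch of what those extension methods could look like (the method names mirror the wrapper fields above; the bodies are assumptions):

```csharp
public static class WebElementExtensions
{
    // one-to-one mapping with SendKeys(), but a lot less technical to read
    public static void Type(this IWebElement element, string text)
    {
        element.SendKeys(text);
    }

    public static void ShouldContainValue(this IWebElement element, string expected)
    {
        var actual = element.GetAttribute("value");
        if (actual != expected)
        {
            throw new Exception("Expected '" + expected + "' but the element contained '" + actual + "'");
        }
    }
}
```

The fields stay plain `IWebElement`s, while tests read as `startButton.Type("AB");`.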

View File

@ -0,0 +1,49 @@
---
title: Faking domain logic
bigimg: /img/Faking domain logic.jpg
date: '2014-09-23'
subtitle: Using C# extensions to create the illusion of domain logic
tags: [ 'tech', 'domain driven design', 'C#', 'code smells' ]
---
Sometimes, life is just a little bit more difficult than you imagined the day before. Sometimes, you have to work on a legacy codebase with custom frameworks rooted so deeply that you're having lots of trouble trying to build around them. To make it a bit more concrete, here's an example: imagine a separate DLL for interfaces and a separate DLL for the implementation. This decision was made because we use NHibernate as a data mapper, not because we want to write beautiful domain driven design code. As a result, writing domain logic methods on our "domain" objects is impossible, because we have three implementations.
There are a few solutions. The first would be the classic solution, called a "service layer" where you simply dump random "domain" logic. Done.
Then there's a slightly better solution involving abstract classes. But it makes things more complicated, and it's not always allowed to inherit from those classes. Besides, in which DLL should you put them? Dependency Entanglement. Welcome to hotel Cali--- erm, DLL Hell.
So, option number three: use extensions on those interfaces.
```csharp
public interface IVacancy
{
    string Description { get; set; }
}
```
would have these implementations:
```csharp
public class FulltimeVacancy : IVacancy
{
    private string description;
    public string Description { get { /* ... */ return description; } set { description = value; } }
}

public class HalftimeVacancy : IVacancy
{
    private string description;
    public string Description { get { /* ... */ return description; } set { description = value; } }
}
```
If I'd want to implement something like `RetrieveLocation()`, based on, for example, Google Maps and other properties, I can place the entry point in an extension class:
```csharp
public static class IVacancyExtensions
{
    public static string RetrieveLocation(this IVacancy vacancy)
    {
        // do your thing: combine vacancy properties with, say, the Google Maps API
        return null;
    }
}
```
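Calling it then reads like ordinary domain logic (a sketch):

```csharp
IVacancy vacancy = new FulltimeVacancy();
var location = vacancy.RetrieveLocation(); // looks as if it lives on the object itself
```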
Using the right namespace imports, I'm able to call the above method on any concrete implementation of `IVacancy`, regardless of its (DLL) location. Now, why would I want to keep code like this as close to the original object as possible? This has multiple reasons:
- It makes code easier to read & refactor.
- It reduces the chance of duplication in another service layer, as people often hit "CTRL+SPACE" to find a method from an object or a piece of logic, and don't go looking in service classes.
- It makes code easier to discuss (since it's also easier to read).
- It's isolated and thus easier to test.
- It avoids a lot of [other code smells](http://martinfowler.com/bliki/CodeSmell.html) (deserves its own article).

View File

@ -0,0 +1,126 @@
---
title: Integration Testing with SQLite
bigimg: /img/Integration Testing with SQLite.jpg
aliases:
- /integration-testing-with-sqlite/
date: '2013-11-04'
subtitle: Decoupling your integrated database environment from your development.
tags: [ 'tech', 'unit testing', 'sql', 'C#', 'sqlite' ]
---
This article is based on the notes I've collected on [My Wiki](http://brainbaking.com/wiki/code/db/sqlite).
On previous projects I've worked on, development PCs came with a local version of the database schema. Each DB change also got rolled out to those computers, which enabled us developers to fool around without breaking anything on the development (or test) environment. This is another step closer to happiness, at least for our proxy customers, who don't have to reinsert their test data every time we flush something from a table. Sometimes though, there's some lame excuse for not having a local database installed:
- We have a lot of stored procedures and it's too hard to duplicate them locally
- We worked like this for years, why would I want a local DB?
- But then my data is out of sync!
- I tried doing that but my manager says I should focus on delivering content
- Blah blah blah
Installing an Oracle XE runtime on your machine might include working around some issues which can take up some time but it's time well invested, compared to multiple developers connecting to one shared database. In any case, there's another possibility: an **in-memory database**, such as [SQLite](http://www.sqlite.org/). This does still require you to keep the upgrade scripts synced, but also enables you to get rid of a lot of annoying things like *foreign key constraints* for testing purposes.
### Integrating SQLite with .NET
Simply use [System.Data.SQLite](http://system.data.sqlite.org/index.html/doc/trunk/www/index.wiki). For each OleDb object, there's an equivalent SQLite one in the correct namespace. The only problem is, some of them don't share an abstract base class, so you'll have to come up with an anti-corruption layer yourself. Create a connection using this connection string:
```csharp
private SQLiteConnection SqLiteDbConnection()
{
    return new SQLiteConnection()
    {
        ConnectionString = "Data Source=:memory:;Version=3;New=True;DateTimeFormat=Ticks",
        Flags = SQLiteConnectionFlags.LogAll
    };
}

public void SetupDb()
{
    using (var connection = SqLiteDbConnection())
    {
        connection.Open();
        var transaction = connection.BeginTransaction();
        var sqLiteCommand = new SQLiteCommand()
        {
            Connection = (SQLiteConnection) connection,
            CommandType = CommandType.Text,
            CommandText = GetSchemaCreateSql()
        };
        sqLiteCommand.ExecuteNonQuery();
        transaction.Commit();
    }
}
```
You need to pay attention to the `DateTimeFormat` part of the connection string: SQLite is "dynamically typed", compared to Oracle, which means it stores dates exactly the same way as chars. Without it, you might encounter an error like `"String was not recognized as a valid DateTime"` when executing a select statement.
**Watch out with closing the DB Connection** when using an in-memory DB, as this completely resets everything. As soon as you open a connection, you can execute create table commands (read your stored DDL file and do it in bulk).
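In practice that means keeping one connection open for the lifetime of the test fixture, something along these lines (a sketch, assuming MSTest):

```csharp
private static SQLiteConnection sharedConnection;

[ClassInitialize]
public static void OpenSharedConnection(TestContext context)
{
    // closing this connection would wipe the whole in-memory database
    sharedConnection = new SQLiteConnection("Data Source=:memory:;Version=3;New=True;DateTimeFormat=Ticks");
    sharedConnection.Open();
}

[ClassCleanup]
public static void CloseSharedConnection()
{
    sharedConnection.Dispose();
}
```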
Your anti-corruption layer between the abstract DB Connection and SQLite/OleDB should expose a few methods. It should be able to query (with or without parameters, or with a provided `DbCommand`) and possibly call stored procedures. This is what I've come up with:
```csharp
public interface IdbConnection
{
    object QueryProcedure(string procedure, IDictionary<string, object> parameters, string outputParameter);
    DbParameter CreateParameter(string field, object value);
    DbCommand CreateCommand(string query);
    DataSet Query(DbCommand command);
    DataSet Query(string query);
}
```
Depending on the implementation, it'll return an `SQLiteCommand` or an `OleDbCommand` instance.
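To make that concrete, here's a minimal sketch of the SQLite side of that layer (the class name and the method bodies are assumptions; only the interface comes from above):

```csharp
public class SqliteDbConnection : IdbConnection
{
    private readonly SQLiteConnection connection;

    public SqliteDbConnection(SQLiteConnection connection)
    {
        this.connection = connection;
    }

    public DbParameter CreateParameter(string field, object value)
    {
        return new SQLiteParameter(field, value);
    }

    public DbCommand CreateCommand(string query)
    {
        return new SQLiteCommand(query, connection);
    }

    public DataSet Query(DbCommand command)
    {
        var dataSet = new DataSet();
        using (var adapter = new SQLiteDataAdapter((SQLiteCommand) command))
        {
            adapter.Fill(dataSet);
        }
        return dataSet;
    }

    public DataSet Query(string query)
    {
        return Query(CreateCommand(query));
    }

    public object QueryProcedure(string procedure, IDictionary<string, object> parameters, string outputParameter)
    {
        // SQLite has no stored procedures; the in-memory test double can only bail out here
        throw new NotSupportedException("Stored procedures are not supported by SQLite.");
    }
}
```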
### Creating integration tests, using Record objects
To be able to quickly insert junk in an in-memory table, I came up with a simple object-table mapping which uses reflection to scan each property of an object and map that property to a column in a table. Normally you would simply use your domain objects and issue a `save()` or `persist()` call using, for instance, `NHibernate`, but we didn't have anything like that and this was easy to set up.
Create an object for each table in your unit test project, extending `DatabaseInsertable`:
```csharp
public abstract class DatabaseInsertable
{
    protected abstract string GetTable();

    public override string ToString()
    {
        var fieldDict = FieldDictionary();
        var fields = "(" + string.Join(",", fieldDict.Keys) + ")";
        var values = "(" + string.Join(",", fieldDict.Values) + ")";
        return "insert into " + GetTable() + fields + " values " + values;
    }

    public void Save()
    {
        DbConnection.Instance.CreateCommand(ToString()).ExecuteNonQuery();
    }

    private Dictionary<string, string> FieldDictionary()
    {
        var dictionary = new Dictionary<string, string>();
        foreach (var info in this.GetType().GetFields())
        {
            if (info.GetValue(this) != null)
            {
                dictionary.Add(info.Name, "'" + info.GetValue(this).ToString() + "'");
            }
        }
        return dictionary;
    }
}
```
For instance:
```csharp
internal class UnitRecord : DatabaseInsertable
{
    public string creator;
    public string guid;

    protected override string GetTable()
    {
        return "UNIT";
    }
}
```
Now you can simply issue `new UnitRecord { creator = "bla", guid = "lala" }.Save();` and it's saved into the UNIT table, yay!
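Behind the scenes, `Save()` builds and executes something like `insert into UNIT(creator,guid) values ('bla','lala')`, derived straight from the public fields via reflection.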

View File

@ -0,0 +1,18 @@
---
title: .NET Memory management VS JVM Memory management
date: '2014-10-24'
subtitle: Increasing your maximum heap size in .NET? Tough luck.
tags: [ 'tech', 'memory management', 'CLR', '.NET', 'JVM' ]
---
Memory management is something to keep in mind when deploying and running applications on top of the JVM. Parameters like `Xmx` and `Xms` are things to juggle with when it comes to finding the perfect balance between too much memory hogging (at app startup) and too little, especially if you're working with heavy duty entity mapping frameworks like Hibernate (and you're not so good at writing fast HQL).
When we bumped into an `OutOfMemoryException` in .NET, I got an Xmx flashback and started searching on how to do the same with the CLR.
Turns out you can't.
> You can't set max heap size in .NET unless you host the CLR yourself in a process. ([source](http://stackoverflow.com/questions/301393/can-i-and-do-i-ever-want-to-set-the-maximum-heap-size-in-net))

> To control the memory allocations of CLR including the max heap size, you need to use the hosting api to host the clr and specifically use the "Memory manager interfaces", some starter info can be found here [MSDN Magazine, column CLR Inside Out: CLR Hosting APIs](http://msdn.microsoft.com/en-us/magazine/cc163567.aspx)

> The heap does indeed keep growing until it can't grow any more. (Obviously this is "after attempting to recover memory through GC, grow the heap".) Basically there isn't nearly as much tuning available in the .NET GC as in Java. You can choose the server GC or the client one, and I think there's an option for turning on/off the concurrent GC (I'll find links in a minute) but that's basically it.
See also:
- [Choosing the right garbage collector for your .NET Application](http://www.atalasoft.com/cs/blogs/rickm/archive/2008/05/14/choosing-the-right-garbage-collector-settings-for-your-application-net-memory-management-part-4.aspx)

View File

@ -0,0 +1,55 @@
---
title: Metaprogramming instead of duplication
bigimg: /img/Metaprogramming instead of duplication.jpg
date: '2014-03-14'
subtitle: convention over duplication, good or bad?
tags: [ 'tech', 'C#', 'java', 'metaprogramming', 'reflection', 'unit testing', 'mocking' ]
---
So... What's up with all that duplication in your unit tests? Let's take a look at a very recognizable pattern when for instance using `RhinoMock` in `C#`:
```csharp
[TestInitialize]
public void SetUp()
{
    dbConfigurationMock = MockRepository.GenerateMock<IDbConfiguration>();
    mountPointLoaderMock = MockRepository.GenerateMock<IMountPointLoader>();
    userEnvironmentFactoryMock = MockRepository.GenerateMock<IUserEnvironmentFactory>();
    userEnvironmentLoaderMock = MockRepository.GenerateMock<IUserEnvironmentLoader>();
    // ...
}
```
We agreed to suffix each instance variable with 'Mock' if it's a mock. That way, when you scroll down to an actual test case, it's clear to everyone what's what: mocks, stubs, actual implementations, and so forth. So why should I repeat myself again and again by initializing a bunch of mocks using `GenerateMock`?
In Java using Mockito, the `@Mock` annotation automagically instantiates a mock for you, provided you annotated your test class with `@RunWith(MockitoJUnitRunner.class)`. I would like to apply this pattern to MSTest but there's not a single hook to be found where I can plug in my initialization code. Thanks a bunch.
Example taken from [Mockito docs](http://docs.mockito.googlecode.com/)
```java
public class ArticleManagerTest {

    @Mock private ArticleCalculator calculator;
    @Mock private ArticleDatabase database;
    @Mock private UserProvider userProvider;

    private ArticleManager manager;
    // ...
}
```
Now, this "problem" is easily solved with a bit of metaprogramming and an abstract class:
- Loop over (private) fields
- Filter out suffixed with 'Mock'
- Initialize.
```csharp
public abstract class AbstractTestCase
{
    [TestInitialize]
    public void CreateMocksBasedOnNamingConvention()
    {
        this.GetType()
            .GetFields(BindingFlags.NonPublic | BindingFlags.Instance)
            .Where(x => x.Name.EndsWith("Mock"))
            .All(InitMock);
    }

    private bool InitMock(FieldInfo field)
    {
        field.SetValue(this, MockRepository.GenerateMock(field.FieldType, new Type[] {}));
        return true;
    }
}
```
Very easy with `LINQ`. The question is: is metaprogramming or reflection in this case "allowed"? Do you think this is "bad" (because it's implicit), or is the convention of suffixing your fields with 'Mock' good enough? The base test case could also be named something like `MockInitializingTestCase` if that makes you feel better.
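For completeness, a hypothetical test class using the convention (field types borrowed from the first snippet):

```csharp
[TestClass]
public class MountPointTest : AbstractTestCase
{
    // instantiated by CreateMocksBasedOnNamingConvention(), no SetUp() needed
    private IDbConfiguration dbConfigurationMock;
    private IMountPointLoader mountPointLoaderMock;

    [TestMethod]
    public void MocksAreCreatedByConvention()
    {
        Assert.IsNotNull(dbConfigurationMock);
        Assert.IsNotNull(mountPointLoaderMock);
    }
}
```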

View File

@ -0,0 +1,113 @@
---
title: Migrating from Extjs to React gradually
bigimg: /img/Migrating from Extjs to React gradually.jpg
aliases:
- /migrating-from-extjs-to-react-gradually/
date: '2016-01-26'
subtitle: Migrating from Extjs to React gradually
tags: ['tech', 'javascript', 'extjs', 'react' ]
---
We were looking for a few alternatives to our big ExtJS 4 application. Since it's not that easy to completely migrate from one front-end framework to the next, a possible solution would be to start developing new parts in another framework. There's a lot of domain logic spread across Ext views and controllers - which shouldn't be there, we are well aware of that. Let's call it "legacy" :-)
The application right now uses Extjs as UI and C# as backend, and lets Ext do the loading of the views/controllers (living in app.js like most Ext applications). There's no ecosystem set up like modern javascript applications have - build systems like Grunt, Gulp, node package managers, Browserify, ... are all not used. We do use Sencha Cmd to minify stuff. To be able to develop new modules without having to worry about Extjs, one of the possibilities would be to use iframes. That enables us to (scenario) test the module using its own routing. It's wrapped inside an Extjs view with an iframe:
```javascript
Ext.define('App.view.utilities.desktop.ReactWindow', {
    extend: 'Ext.form.Panel',
    alias: 'widget.App_view_utilities_desktop_ReactWindow',
    bodyPadding: 5,
    width: 600,
    layout: {
        type: 'vbox',
        align: 'stretch'
    },
    initComponent: function() {
        var me = this;
        var dynamicPanel = new Ext.Component({
            autoEl: {
                tag: 'iframe',
                style: 'border: none',
                src: me.url
            },
            flex: 1
        });
        Ext.apply(me, {
            title: 'React',
            defaults: {
                labelWidth: 120
            },
            items: [dynamicPanel]
        });
        me.callParent();
    }
});
```
When the module is triggered in the main app, we simply add the panel to the desktop:
```javascript
this.addPanel(Ext.create('App.view.utilities.desktop.ReactWindow', {
    url: 'react/mod/someurl/'
}));
```
Our app structure in the GUI folder would be something like this:
```
[GUI]
-- global.asax
-- default.aspx
-- [app]   -> extjs
-- [react] -> reactjs
```
That's simple enough. But how would one be able to open new Ext panels from within the React sub-application? That would be done via custom events thrown to the parent window. Catching these is just a matter of adding this to some controller in Extjs:
```javascript
window.addEventListener('react', function(e) {
    me.onReactEvent(e.detail, e);
});
```
The `detail` property is part of a custom event, thrown in a react component. This below might be some cell component, taken from the [fixed-data-table](https://facebook.github.io/fixed-data-table/) example:
```jsx
class MyLinkCell extends React.Component {
    clicked(e) {
        const el = e.target;
        const eventObj = {
            'detail': {
                'type': 'downloadlink',
                'url': 'react/some/detail/url'
            }
        };
        console.log('clicked - "react" event thrown:');
        console.dir(eventObj);
        if (window.parent) {
            window.parent.dispatchEvent(new CustomEvent('react', eventObj));
        }
    }

    render() {
        const {rowIndex, field, data} = this.props;
        const link = data[rowIndex][field];
        return (
            <Cell>
                <a onClick={this.clicked} href='#'>{link}</a>
            </Cell>
        );
    }
}
```
Of course this is more or less the same when using, for instance, Angular2 instead of React: the custom event is part of the JS standard, see [Creating and triggering events](https://developer.mozilla.org/en-US/docs/Web/Guide/Events/Creating_and_triggering_events) on MDN.
To be able to use source maps in conjunction with Browserify/Watchify, I had to tweak some parameters in package.json:
`watchify index.js --verbose -d -t babelify --sourceMapRelative . --outfile=bundle.js`
Things we still need to research:
- How well does React compare to Angular2 in terms of components? For instance react doesn't include [routing](http://www.kriasoft.com/react-routing/) by default. We'll need to rewrite some already-extjs-custom components in the target framework.
- How should we include the build ecosystem (npm, gulp/grunt/browserify, ...) into our C# build solution and Teamcity build? Will [http://reactjs.net/](http://reactjs.net/) help for instance?
- Can we use [http://reactjs.net/](http://reactjs.net/) to render serverside components?
- Which build tool should we use? We're being overwhelmed by choice: bower/npm as package manager, I've seen stuff like [Webpack in conjunction with React](http://www.christianalfoni.com/articles/2015_10_01_Taking-the-next-step-with-react-and-webpack), ... The list is huge if you've not kept up with the JS technology news.
One of the things we liked a lot was TypeScript or ES6 and the ability to use `() =>` arrow functions and promises. Enabling this requires a transpiler or a polyfill like [Babel JS](https://babeljs.io/), but maybe adding this as a build step in Sencha Cmd will also ease some pain we're having with the current Ext code.

View File

@ -0,0 +1,150 @@
---
title: Bye autotools hello Scons
bigimg: /img/Bye autotools hello Scons.jpg
date: '2014-03-26'
subtitle: Building C++ projects with Scons
tags: [ 'tech', 'C++', 'python', 'build ecosystem' ]
---
Remember this?
- `./configure`
- `make`
- `make install`
That's not so bad, as long as you have the right compiler and linker flags configured, depending on the target OS. The real problem, however, is trying to figure out how to alter something if you didn't write the `Makefile` yourself. Or if you in fact did write it, but it was some time ago. Two days. No, four hours.
### The problem
Try to study the autoconf and automake flow diagram, explained [on Wikipedia: the GNU build system](http://en.wikipedia.org/wiki/GNU_build_system). Headache coming up? Suppose we would like to use these ... uhm, "thingies", for a simple C++ project.
First, let me define simple:
- It has some (shared) library dependencies
- The source lives in `src`
- Since it's obviously written the TDD way, the tests live in `test`
Onward, to the `Makefile` creation station!
This is a sample file, from the [Google Test Makefile](https://code.google.com/p/googletest/source/browse/trunk/make/Makefile):
```makefile
GTEST_DIR = ..
USER_DIR = ../samples

CPPFLAGS += -isystem $(GTEST_DIR)/include
CXXFLAGS += -g -Wall -Wextra -pthread

TESTS = sample1_unittest

GTEST_HEADERS = $(GTEST_DIR)/include/gtest/*.h \
                $(GTEST_DIR)/include/gtest/internal/*.h

all : $(TESTS)

clean :
	rm -f $(TESTS) gtest.a gtest_main.a *.o

GTEST_SRCS_ = $(GTEST_DIR)/src/*.cc $(GTEST_DIR)/src/*.h $(GTEST_HEADERS)

gtest-all.o : $(GTEST_SRCS_)
	$(CXX) $(CPPFLAGS) -I$(GTEST_DIR) $(CXXFLAGS) -c \
	    $(GTEST_DIR)/src/gtest-all.cc

gtest_main.o : $(GTEST_SRCS_)
	$(CXX) $(CPPFLAGS) -I$(GTEST_DIR) $(CXXFLAGS) -c \
	    $(GTEST_DIR)/src/gtest_main.cc

gtest.a : gtest-all.o
	$(AR) $(ARFLAGS) $@ $^

gtest_main.a : gtest-all.o gtest_main.o
	$(AR) $(ARFLAGS) $@ $^

sample1.o : $(USER_DIR)/sample1.cc $(USER_DIR)/sample1.h $(GTEST_HEADERS)
	$(CXX) $(CPPFLAGS) $(CXXFLAGS) -c $(USER_DIR)/sample1.cc

sample1_unittest.o : $(USER_DIR)/sample1_unittest.cc \
                     $(USER_DIR)/sample1.h $(GTEST_HEADERS)
	$(CXX) $(CPPFLAGS) $(CXXFLAGS) -c $(USER_DIR)/sample1_unittest.cc

sample1_unittest : sample1.o sample1_unittest.o gtest_main.a
	$(CXX) $(CPPFLAGS) $(CXXFLAGS) -lpthread $^ -o $@
```
This first builds the gtest_main.a binary, to be able to link it with our test after the source (sample1.o) has been built. The syntax is clumsy, simple files require deep knowledge of how flags and linking work, and I don't want to specify everything in one block.
As esr said in his blog post [Scons is full of win today](http://esr.ibiblio.org/?p=3089), it's a maintenance nightmare. What to do?
There are a few alternatives which aim to cover everything autotools does, such as `QMake` from Trolltech or `CMake` (that actually generates Makefiles. You're not helping, CMake!). Or, one could go for [Scons](http://scons.org/).
### build your software, better.
Scons starts with a single `SConstruct` file, which acts as the makefile. You can bootstrap the default build target using the `scons` command (clean up with `scons --clean`). The big deal here is that the contents of that file are simply Python (2.7, I know)!
Want to write a utility function to gather all your `cpp` files? Fine, go ahead, `def mystuff():` (you do know this already exists, right? Use `Glob()`) Want to unit test these, and include them? Done. Want to split up everything per source directory? Use `SConscript` files and include these from within your root `SConstruct` using `SConscript('file', 'envVarToExport')`.
This is my blueprint construct file:
```python
env = Environment(CXX = 'g++')
gtest = env.SConscript('lib/gtest/SConscript', 'env')
src = env.SConscript('src/SConscript', 'env')
out = env.SConscript('test/SConscript', 'env gtest src')

# output is an array with path to built binaries.
# We only built one file - run it (includes gtest_main).
test = Command(target = "testoutput",
               source = str(out[0]),
               action = str(out[0]))
AlwaysBuild(test)
```
Things to note:
- Scons works with [Environments](http://www.scons.org/doc/2.3.1/HTML/scons-user.html#chap-environments) which can be shared and cloned (see below)
- You can share variables with the second parameter
- Executing after a build also works, passing in the result of conscripts.
- Ensure to always build your test with `AlwaysBuild()`
This is the conscript which builds google test:
```python
Import('env')
env = env.Clone(CPPPATH = './:./include')
env.Append(CXXFLAGS = ['-g', '-Wall', '-Wextra', '-pthread'])
gtest = env.Library(target = 'gtest', source = ['src/gtest-all.cc', 'src/gtest_main.cc'])
Return('gtest')
```
Things to note:
- Fetch the shared variables with `Import()` and return stuff with `Return()` (it's a function)
- specify flags all you want.
- Building something? `Program()`, `Library()` or `SharedLibrary()`.
Source:
```python
Import('env')
env = env.Clone(CPPPATH = './')
src = env.Library(target = 'wizards', source = Glob('*.cc'))
Return('src')
```
Things to note:
- `Glob()` auto-reads all files in the current dir.
And finally, test, linking both source and google test:
```python
Import('env', 'gtest', 'src')
env = env.Clone()
env.Append(LIBPATH = ['#lib/gtest', '#src'])
env.Append(LIBS = [gtest, src])
out = env.Program(target = 'wizards_unittests', source = Glob('*.cc'))
Return('out')
```
Things to note:
- Use the hashtag `#` to point to the root dir where the `SConstruct` file resides.
- Linking is as simple as providing `LIBS` and the right path.
So where does that leave us? Yes, there's still "syntax" to be learned, even if you're a seasoned Python developer; you need to know which function to use for what, and that's what the excellent [scons doc](http://www.scons.org/doc/2.3.1/HTML/scons-user.html) is for. I know it made my life a lot easier while trying to do something simple, and this is only the tip of the iceberg. Scons is relatively popular according to Stack Overflow, the documentation is excellent, and if all else fails you can write your own garbage in a full-fledged dynamic language.
The only really irritating bit is the python 2.7 dependency, so don't forget to use [virtualenv](https://pypi.python.org/pypi/virtualenv).

View File

@ -0,0 +1,105 @@
---
title: Unit Testing Extjs UI with Siesta
bigimg: /img/Unit Testing Extjs UI with Siesta.jpg
aliases:
- /unit-testing-extjs-ui-with-siesta/
date: '2014-12-23'
subtitle: An attempt to replace unstable WebDriver tests with Siesta UI tests
tags: ['tech', 'unit testing', 'javascript', 'extjs', 'siesta']
---
### WebDriver & js-heavy frameworks ###
Writing scenario tests for javascript-heavy UI webpages can be really difficult. It gets complicated pretty quickly if you're using a lot of async calls or a lot of javascript-heavy UI components. On our current project, we use Extjs as the UI layer in a single-page aspx page to bootstrap our Extjs app. Extjs is a (heavyweight) javascript framework for creating windows, panels, grids, buttons, menus, ... like you're used to when using client/server desktop applications. You define components on a view, behavior on a controller, and data and the way it's loaded on the model.
The problem with javascript-heavy frameworks like this is that if your team does not have a lot of experience using JS in general, things can get extremely messy and cluttered. Which they did. Coupled with a lot of regression (a misplaced ";" could break an entire part of the application), we needed an automated way to catch those bugs.
Since I have a lot of experience with WebDriver, we started using it to write scenario tests against the deployed application. A test should emulate customer behavior: click on a menu, expect a window to be opened, fill in a form and expect something else to happen. It's not isolated, but tests everything together.
WebDriver is great, but since a lot of javascript events are happening in the background, it's very difficult to write an easily usable DSL to manipulate the UI. One has to wait for ajax calls to finish, for DOM elements to appear or disappear, and so on. Tests became unstable and failed intermittently - sometimes on the CI build but never on your development environment. It takes more and more time to find and fix those things.
### A possible solution: Siesta ###
[Siesta](http://www.bryntum.com/products/siesta/) is a product from Bryntum especially written to unit test Extjs applications, focussing on the UI. Sounds nice, so we decided to check it out as a possible alternative to WebDriver. As the website states:
> Siesta is a JavaScript unit testing tool that can help you test any JavaScript code and also perform testing of the DOM and simulate user interactions. The tool can be used together with any type of JavaScript codebase jQuery, Ext JS, NodeJS, Dojo, YUI etc. Using the API, you can choose from many types of assertions ranging from simple logical JS object
Sounds good, right?
The setup isn't too difficult, after a few hours of fiddling I managed to bootstrap our Extjs application using this index.js file:
```javascript
var Harness = Siesta.Harness.Browser.ExtJS;

Harness.configure({
    title : 'Test Suite',
    loaderPath : {
        'Ext': '../extjs',
        'Ext.ux': '../extjs/ux',
        'MyApp': '../app'
    },
    preload : [
        // version of ExtJS used by your application
        '../extjs/resources/css/ext-all.css',
        '../resources/css/workb.css',
        // version of ExtJS used by your application
        '../extjs/ext-all-debug.js',
        './app-vars.js',
        {
            text: "Ext.Loader.setConfig({ 'Ext': '../extjs', 'Ext.ux': '../extjs/ux', 'MyApp': '../app' })"
        },
        '../extjs/overrides/javascript-overrides.js',
        '../extjs/overrides/PFW-overrides.js',
        '../app/app.js'
    ]
});

Harness.start(
    'tests/001_sanity.t.js',
    'tests/002_window.t.js'
);
```
Some pitfalls: `loaderPath` isn't evaluated in the preload so you have to reset it with `Ext.Loader.setConfig()` and I had to alter our app.js file. Our directory structure looks like this:
```
root
-- app
-- extjs
---- ux
-- siesta
---- tests
```
So you have to watch out for relative paths like `appFolder` in app.js:
```javascript
Ext.application({
    name: 'MyApp',
    appFolder: (_siesta ? '../' : '') + 'app',
    // ...
});
```
After that, you can start writing tests. Looking at the examples, the test flow looks a lot like our current WebDriver tests (wait for rows present, wait x seconds, click on this, do that). Here's a simple test to create a view and check if the grid has some rows:
```javascript
StartTest(function(t) {
    t.diag("Creating some window");

    var view = Ext.create('MyApp.view.SomeOverview', {
        renderTo: Ext.getBody() // required
    });
    var grid = view.down("grid");

    t.chain(
        { waitFor : 'rowsVisible', args : grid }
    );
});
```
![siesta view test in action]({{urls.media}}/siesta.png)
Siesta also comes with its downsides though.
- JS Test code is really messy. Chaining, async calls, ugly data setup for stores, ... A simple test can get complicated fast and requires advanced JS knowledge not everybody in our team has.
- `waitFor` exposes the same problems we have with our current WebDriver tests, so it's not that much of an improvement
- Test data setup cannot be reused from our backend integration tests (we use the builder pattern there to create data in the DB)
- Creating a view to test doesn't test the controller and vice versa. Still too low level for us.
The biggest problem is that it's still more of an integration/unit test than a scenario test, and it's quite tightly coupled to your implementation. Since our implementation is far from perfect, Siesta is not the optimal solution for us. For example, we create stores inside our views and load them in `initComponent()`. There's no way to provide a stub store with some dummy data. We'd have to refactor 200+ views to be able to write tests. Of course, tests should be written before the implementation...
If you would like to know more about Siesta or JS BDD testing, take a look at
- [Pivotallabs blog post](http://pivotallabs.com/sencha-touch-bdd-part-5-controller-testing/)
- [Siesta API doc: Getting started](http://www.bryntum.com/docs/siesta/#!/guide/siesta_getting_started)

View File

@ -0,0 +1,143 @@
---
title: Unit Testing Stored Procedures
bigimg: /img/Unit Testing Stored Procedures.jpg
date: '2013-10-10'
subtitle: And a pragmatic guide on how to include them into your build system.
tags: [ 'tech', 'unit testing', 'sql']
---
This article is based on the notes I've collected on [My Wiki](http://brainbaking.com/wiki/code/db/sql).
Test Driven Development (or TDD): it's one of those buzzwords which usually appear in the same sentence as "scrum" or "XP". But in practice, I've seen few people actually applying it all the way through. What do I mean by that? You're probably very familiar with, say, Java or .NET, and you know how to write unit tests in that language using your beloved IDE. That's a good start, right. Maybe you might even do it the test-first way: writing a failing test (letting it fail for the right reason), writing the implementation and maybe some refactoring. Red, Green, Refactor.
But what do you do when you need to step out of your language comfort zone to write some javascript on the client side? Do you copypaste stuff or try to apply the same techniques you're used to? You might have heard of test frameworks like [Jasmine](http://pivotal.github.io/jasmine/) and use these. Also good for you! Client side development is very popular, but what about SQL? Do you write tests for stored procedures? I thought so. There are plenty of frameworks available to help you do this, for instance [SQL Developer](http://docs.oracle.com/cd/E15846_01/doc.21/e15222/unit_testing.htm), which I used because it's already installed on every developer's PC and has a "friendly" interface.
![sql dev unit test](http://brainbaking.com/wiki/_media/code/db/unittest_sqldev.png)
Once you create a "test repository", SQL Developer will create test tables to store its unit test descriptions and results, prefixed with "UT_". You can specify whether you'd like to create a new schema for it or not. When creating a new test, the tool asks you a couple of questions:
1. What do you want to insert or execute before the test? (Setup phase)
2. What stored procedure do you want to execute? (Execute system under test phase)
3. What should the result of the procedure be, or which query should be executed to check its results? (Verify phase)
4. What do you want to insert or execute after the test? (Teardown phase)
You can reuse the parts to be executed in the different phases for another unit test, yay! This data will also be stored in the predefined tables.
### But what about existing data when inserting new stuff?
Use this as teardown:

```sql
ROLLBACK;
```
### But how do you execute a stored procedure with IN/OUT REF CURSOR arguments?
SQL Developer has some trouble executing that, indeed. In this case, we use a little trick:
1. Create a dummy stored procedure:
```sql
create or replace
PROCEDURE UT_DUMMY AS
BEGIN
  NULL;
END UT_DUMMY;
```
2. Execute the dummy procedure in the SUT phase.
3. Use the verify phase to call the actual procedure under test yourself, and do your own verification:
```sql
DECLARE
  P_USERID NUMBER;
  MY_P_CURSOR SCHEMA.PACKAGE.Cursor;
  cursor_element MY_P_CURSOR.SCHEMA.CursorType;
  found boolean;
BEGIN
  P_USERID := 11;
  found := false;

  PACKAGE.MYPROCEDURE(
    P_USERID => P_USERID,
    P_CURSOR => MY_P_CURSOR
  );

  WHILE TRUE LOOP
    FETCH MY_P_CURSOR INTO cursor_element;
    EXIT WHEN MY_P_CURSOR%NOTFOUND;
    IF cursor_element.columntocheck = 'My value' THEN
      found := true;
    END IF;
  END LOOP;

  IF found = false THEN
    raise_application_error(-20000, 'Your error message in here!');
  END IF;
END;
```
### Okay, but what about integrating the execution of these tests into my build system?
You can use the commandline utility provided by SQL Developer to execute a test or a suite:
```
ututil -run -suite -name [name] -repo [repo] -db [db] -log 3
```
It's very interesting to dynamically import and export tests using "-imp" and "-exp", and creating one suite using this PL/SQL:
```sql
SET serveroutput ON;

delete from ut_suite_items;
delete from ut_suite;

DROP SEQUENCE ut_suite_items_seq;
CREATE SEQUENCE ut_suite_items_seq
  MINVALUE 0
  MAXVALUE 999999999999999999999999999
  START WITH 0
  INCREMENT BY 1;

DECLARE
  suiteid VARCHAR2(900) := 'ALL';
  utid VARCHAR2(900);
  cursor tableCursor is SELECT UT_ID FROM UT_TEST;
BEGIN
  dbms_output.enable(10000);
  DBMS_OUTPUT.PUT_LINE('Creating one test suite to rule them ALL...');

  insert into ut_suite(ut_sid, coverage, name, created_on, created_by, updated_on, updated_by)
  values(suiteid, 0, suiteid, null, null, null, null);

  open tableCursor;
  fetch tableCursor into utid;
  WHILE (tableCursor%FOUND) LOOP
    insert into ut_suite_items(ut_sid, ut_id, ut_nsid, run_start, run_tear, sequence, created_on, created_by, updated_on, updated_by)
    values (suiteid, utid, null, 'Y', 'Y', ut_suite_items_seq.nextval, null, null, null, null);
    fetch tableCursor into utid;
  END LOOP;
  close tableCursor;
  commit;

  DBMS_OUTPUT.PUT_LINE('SUCCESS - test suite created!');
END;
/
```
It creates only one suite called 'ALL' which can then be executed. The commandline utility will output "UT_SUCCESS" or throw some kind of exception if one of the tests failed.
### I still get errors using ututil, some ConnectException?
The utility cannot handle any TNS connections you've entered in SQL Developer. Change these to regular connection strings and all will be well. Yes, it's a huge disadvantage, and yes, the connection settings are stored in your locally installed SQL Developer instance, which also kind of sucks. We needed to install SQL Developer on the build integration PC and configure the same connections within it.
### What about versioning? The tests are stored in my DB, but it doesn't evolve as quickly as the code does!
Right, that's where the import/export thing comes in. We store the actual unit tests in XML format inside our regular source control system, next to the "other" unit tests (in this case in .NET). Every time someone writes a unit test using SQL developer, it extracts that test using:
```
ututil -exp -test [name] -file [file] ...
```
which creates an XML file. Executing the tests happens within a wrapper .NET test class, which goes through some steps to set up the DB system correctly:
1. Clean up all UT_TEST* and UT_SUITE* tables, which contain the actual tests.
2. Loop through all XML files, and import them one by one (they get inserted into the cleaned tables).
3. Generate the 'ALL' unit test suite - see PL/SQL above.
4. Execute the test suite using ututil and parse the results from the command line.
That's as far as our imagination and budget go. We have a stable system which is able to version the XML files - inserting the test data is still dependent on the actual state of the database. One could explore dynamically creating the tables the stored procedures use, but as our codebase is legacy (read: really, really old stuff), we decided not to invest too much time in that.

View File

@ -0,0 +1,40 @@
---
title: 'Unit testing in Legacy Projects: VB6'
date: '2016-12-27'
tags: [ 'tech', 'unit testing', 'VB6' ]
---
Thanks to the [Postmodern VB6](https://ihadthisideaonce.com/2015/05/13/postmodern-vb6-a-quick-start-with-simplyvbunit/) article I found on the internetz, I decided to give [SimplyVBUnit](http://simplyvbunit.sourceforge.net) a try. My job requires sporadic Visual Basic 6 code changes in the big legacy project we're converting to C#. It's an administrative system bound to Belgian laws, so as you can imagine they change every few months and the old software still has to be compliant with those crazy new rules. As a result, we sometimes dabble in VB6 code. It feels more like drowning, really.
Unit testing is what keeps me from rage-quitting on every project. The SimplyVBUnit syntax is quite nice if you're used to writing NUnit tests; they also work with `Assert.That`, for instance:
```vb
Public Sub MyTestMethod_WithSomeArg_ShouldReturn45()
    Dim result As Long
    result = MyTestMethod(arg1)

    Assert.That result, Iz.EqualTo(45)
End Sub
```
![simply vb unit screenshot](/img/simplyvbunit.png)
The test code is very readable thanks to the [NUnit](https://nunit.org/index.php?p=documentation) influence on SimplyVBUnit. The package is very easy to install, but there are a few gotchas.
You need to create a separate VBP file (Visual Basic Project) which acts as your unit test project, with a reference to the SimplyVBUnit package. That's easy enough, but it's a project. That means it can't reference other projects! Our software is basically one large project with heaps of muddy code. Compiling the EXE and referencing that is not an option for us. That leaves us with a few alternatives:
- Package the test runner and the dependency in your production code. (Hmmm...)
- Create a DLL project and move the test code to the DLL. This requires another build step in our already-too-long-manual-deployment procedure. Nope.
- Create a group (vbg), include both projects, and include modules/forms/class modules to be tested in the unit test project as an existing file. This means both projects will include the same source files. SourceSafe actually notices this if you have a file checked out and will ask you to update the "other" file in the second project.
The group makes it possible to open everything at once. Unit tests live in a subfolder. This is our vbg file:
```
VBGROUP 5.0
Project=program.vbp
StartupProject=UnitTests\UnitTests.vbp
```
Utilizing two projects in one group means switching between both as the startup project. One could use the group for development and running tests, but the separate vbps for debugging. It's all still fairly new for us, so we'll see where this ends up.
Unit tests are useless if they aren't run (automatically). At this moment we try to avoid coding anything in VB6 at all. If we do, we run the tests manually. At least some parts of the code are tested without bootstrapping the whole application and plowing through various forms to get to the part where you actually changed something...

View File

@ -0,0 +1,54 @@
---
title: Visual Studio 2012 for Eclipse users
bigimg: /img/Visual Studio 2012 for Eclipse users.jpg
date: '2013-10-14'
subtitle: Trying to fill the gap of missing features in VStudio.
tags: [ 'tech', 'visual studio', 'eclipse']
---
When switching over to a new editor and new language, I can sometimes get frustrated by missing features I got (very) attached to. This excludes the obvious difference in shortcut keys.
### Shortcuts and refactoring tools ###
One plugin to rule them all: [ReSharper](http://www.jetbrains.com/resharper/). This productivity tool brings back the incredible development speed to the Visual Studio platform. You can almost map the Eclipse (or IntelliJ, since the guys from JetBrains developed it) keys to the ReSharper keys. If you're used to quickly refactoring out variables, introducing classes from parameters or creating test classes, you'll be in heaven.
The following shortcuts can be mapped (you're welcome):
| **Eclipse shortcut** | **ReSharper shortcut** | **description** |
|-----------------------------------|------------|--------|
| CTRL+D | CTRL+L | remove line |
| ALT+DOWN | CTRL+D | duplicate line |
| CTRL+SPACE (CTRL+ENTER) | CTRL+SPACE (TAB) | code completion, select in combobox |
| ALT+SHIFT+UP/DOWN | CTRL+ALT+LEFT/RIGHT | Extend/Shrink selection |
| CTRL+SHIFT+/ | CTRL+ALT+/ | comment line |
| CTRL+SHIFT+1 | ALT+ENTER | quick fix |
| ALT+UP/DOWN | CTRL+SHIFT+ALT+UP/DOWN | move line |
| CTRL+SHIFT+O | CTRL+E, (C)/F | organize imports (and format etc, F = silent) |
| CTRL+F11 | CTRL+U, U | rerun last |
| CTRL+O | ALT+\ | Go to file member |
| CTRL+SHIFT+G | CTRL+SHIFT+ALT+F12 (SHIFT+F12) | find usages |
| F3 | F12 | go to definition |
| CTRL+SHIFT+. | SHIFT+ALT+PGDN | go to next error |
| CTR+, | SHIFT+ALT+PGUP | go to previous error |
| ALT+SHIFT+I | CTRL+R, I | inline variable |
| ALT+SHIFT+R | CTRL+R, R | rename |
| ALT+SHIFT+M | CTRL+R, M | extract method |
| ALT+SHIFT+C | CTRL+R, S | change method signature |
| CTRL+SHIFT+B | F9 | toggle breakpoint |
| CTRL+M | SHIFT+ALT+ENTER | toggle full screen mode |
Other interesting links:
- [Default keymap PDF overview](http://www.jetbrains.com/resharper/docs/ReSharper70DefaultKeymap_IDEA_scheme.pdf)
- [IntelliJ keymap PDF overview](http://www.jetbrains.com/resharper/docs/ReSharper70DefaultKeymap_IDEA_scheme.pdf)
### Comparing files with each other ###
Simply comparing two files within the editor can be a pain - the easiest way to do it in Eclipse is to just select both files, right-click and select "compare". No such option here. You can compare a file with a previous version from TFS, but not two physically different files, weird. Install [VSCommands](http://vscommands.squaredinfinity.com/) and that problem is also solved:
![compare files in vstudio]({{urls.media}}/compare_files_vstudio2012.png)
It uses the built-in VS2012 comparison window, which is quite nice.

View File

@ -0,0 +1,96 @@
---
title: Webdriver Exception Handling
date: '2015-01-14'
subtitle: What should you do when something goes wrong with your scenario tests
bigimg: /img/Webdriver Exception Handling.jpg
tags: [ 'tech', 'unit testing', 'C#', 'webdriver', 'scenario testing' ]
---
As the previous post indicated, we're trying to stabilize our scenario tests created with WebDriver. One of the things we did was to capture as much data as possible when something goes wrong. Something like a typical `ElementNotFoundException`, or the less common `StaleElementException` (element detached from the DOM after evaluation) - these things can be hard to trace if you don't run the tests locally. We also stumbled upon the "it works on my machine" problem - tests succeeding on one development machine but not on the other - mostly related to timing issues.
So, what should you do when something goes wrong?
- capture what happened! (screenshot)
- capture what happened! (exception stacktrace logging)
- capture what happened! (serverside logging)
WebDriver has a `GetScreenshot()` method you can use to dump an image to a file on exception. We used a bit of pointcut magic using PostSharp to automagically handle every exception without manually having to write each `try { }` clause.
```csharp
WebDriver().GetScreenshot().SaveAsFile(fileName + ".png", ImageFormat.Png);
```
After saving the image, we also capture the exception and some extra serverside logging:
```csharp
File.WriteAllText(fileName + ".txt",
    "-- Resolved URL: " + ScenarioFixture.Instance.ResolveHostAndPort() + Environment.NewLine +
    "-- Actual URL: " + ScenarioFixture.Instance.Driver.Url + Environment.NewLine +
    "-- Exception Message: " + ex.Message + Environment.NewLine +
    "-- Stacktrace: " + Environment.NewLine + ex.StackTrace + Environment.NewLine + Environment.NewLine +
    "-- Service log: " + Environment.NewLine + ReadServiceLogFromDeployedApp());
```
Because the webservice is deployed somewhere else (scenario tests run against the nightly build IIS webserver), we need to access the logfiles using a `GET` call, done with RestSharp:
```csharp
private static string ReadServiceLogFromDeployedApp()
{
    var restClient = new RestClient(ScenarioFixture.Instance.ResolveHostAndPort());
    var restRequest = new RestRequest("log/servicelog.txt");
    restRequest.AddHeader("Content-Type", "text/plain");
    restRequest.AddHeader("Accept", "text/plain");

    var response = restClient.Execute(restRequest);
    return response.Content;
}
```
Now, to easily access those files (the screenshot and the written log for each failing test), we wrap the exception in another exception containing a direct link to both files. That enables every developer to browse to the failing test on our CI env (teamcity) and simply click on the link!
To be able to do that, combined with the pointcut, implement the `OnException()` hook and call the above code:
```csharp
[Serializable]
[ScenarioExceptionAspect(AttributeExclude = true)]
public class ScenarioExceptionAspect : OnMethodBoundaryAspect
{
    public override void OnException(MethodExecutionArgs args)
    {
        var exceptionFileName = Directory.GetCurrentDirectory() + @"/" + WebDriverExceptionHandler.Handle(args.Exception);
        exceptionFileName = exceptionFileName.Replace(@"C:", @"file://teamcity/c$");
        exceptionFileName = exceptionFileName.Replace(@"\", @"/");

        throw new Exception("Scenario test failed"
            + Environment.NewLine
            + " -- Screenshot: " + exceptionFileName + ".png"
            + Environment.NewLine
            + " -- Log: " + exceptionFileName + ".txt", args.Exception);
    }
}
```
This introduces one more problem: what if you want to trigger an exception on purpose, something like `[ExpectedException(typeof(InvalidArgumentException))]`? We'd still end up in our aspect, take a screenshot and dump everything. We fixed this by taking a peek at the live stacktrace. I know it's far from ideal, but it serves its purpose and works pretty well for the moment.
```csharp
private static bool ExpectedSomeException(StackTrace trace)
{
    const int arbitraryMaxDepthToLookForAttribs = 5;
    for (var stackElements = 1; stackElements <= arbitraryMaxDepthToLookForAttribs; stackElements++)
    {
        if (AnyExpectedExceptionInAttribute(trace, stackElements))
        {
            return true;
        }
    }
    return false;
}

private static bool AnyExpectedExceptionInAttribute(StackTrace trace, int stackElements)
{
    var callingMethod = trace.GetFrame(stackElements).GetMethod();
    var anyExpectedExceptionAttrib = callingMethod.GetCustomAttributes(typeof(ExpectedExceptionAttribute), true).Any();
    return anyExpectedExceptionAttrib;
}
```
Every new `StackTrace` instance contains all stack data from the point where it's created, so create one in the `OnException` method; otherwise, remember to look "deeper" into the stack yourself. Yes, we could solve this with recursion instead of an arbitrary number of elements inside a for loop, but we were trying to solve something else and this stood in the way, so naturally the reaction was to not invest too much time.
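For reference, the recursive variant would be small enough (a sketch, not what's in our codebase):

```csharp
private static bool ExpectedSomeException(StackTrace trace, int depth = 1)
{
    // walk up the live stack until we run out of frames or find the attribute
    if (depth >= trace.FrameCount)
    {
        return false;
    }
    return AnyExpectedExceptionInAttribute(trace, depth)
        || ExpectedSomeException(trace, depth + 1);
}
```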
What's the outcome? This:
> Test(s) failed. System.Exception : Scenario test failed
> -- Screenshot: file://teamcity/c$/buildagents/buildAgentOne/work/10dbfc9caad025f8/Proj/ScenarioTests/bin/Debug/ex-15-01-14-15-56-02.png
> -- Log: file://teamcity/c$/buildagents/buildAgentOne/work/10dbfc9caad025f8/Proj/ScenarioTests/bin/Debug/ex-15-01-14-15-56-02.txt
> ----> System.Exception : Root menu could not be opened after 10 tries?
> at Proj.ScenarioTests.ScenarioExceptionAspect.OnException(MethodExecutionArgs args) in c:\buildagents\buildAgentOne\work\10dbfc9caad025f8\Proj\Proj.ScenarioTests\ScenarioExceptionAttributeHandler.cs:line 36
> ...