Switching to Jira from Asana

Moving towards a more standardized Agile methodology means finding a tool that suits our processes.

Asana has helped us develop new features and track issues, but we keep running into difficulties organizing it the way we want.

The main problems with Asana:

  1. inability to easily find what the teams are actually working on
  2. no support for creating and tracking epics (combining multiple stories in order to implement flows and track them across several sprints).

    You can mitigate this by creating separate projects, adding epics as tasks and stories as sub-tasks, then assigning these stories to sprints, but you still don’t have any way of seeing a progress report for a given epic.
    We could track an epic’s status by creating it as a project, but then we couldn’t prioritize it as a whole.

  3. poor reporting – we have only a burn-up chart
  4. poor overall responsiveness – it freezes and goes down often
  5. hard to use in a traditional Scrum workflow – it does not define or enforce any process

On the other hand, Jira is an older tool created by a well-known company (Atlassian) with rich experience in project management software. Atlassian also owns Bitbucket – a service similar to GitHub, with which Jira integrates well. Atlassian serves 85 of the Fortune 100.

Jira was designed for teams wanting to enforce a standardized flow.

Main advantages of Jira over Asana:

  1. more mature product
  2. built-in support for Scrum and Kanban; you can also define your own flow using a visual representation
  3. supports epics
  4. you can prioritize the entire product backlog, including epics, in addition to prioritizing the items within an individual epic
  5. easily see active sprints
  6. interactive scrum or Kanban boards (see what’s in progress/done and change status by moving items like you do with the post-its)
  7. supports estimates using several methods (classic time, issue count, business value, story points)
  8. advanced agile reporting (sprint burn-down chart, epic burn-down, velocity, cumulative flow diagram, etc)
  9. can use the chosen estimation method (story points, for example) to take story complexity into account in reporting
  10. can track time and lets you edit the remaining time for tasks
  11. supports working with versions
  12. supports components (e.g. Database, User Interface, etc.)
  13. configurable screens for each issue type (story, bug)
  14. configurable fields
  15. integrates with GitHub to link issues to commits; also integrates well with other Atlassian tools like Bitbucket, Confluence and Bamboo
  16. faster and more reliable

Main advantages of Asana over Jira:

  1. nicer UI/UX. Asana is a newer product. Every UI interaction is quicker: assign, add labels and comments, upload files, change state, set due dates, add followers, etc.
  2. more flexible. Does not impose any flow. This can be either a plus or a minus, depending on what we want.
  3. it is more visible who is working on what, and it is easier to retrieve the list of items assigned to a person

As a personal impression, it feels very natural to work in Asana, and I have a hard time finding my way around Jira. If I could combine Asana’s ease of use with Jira’s flows and reporting, that would be a clear choice. For now it seems we have to choose between ease of use and better process support.

Coming from a flexible tool like Asana to something more rigid like Jira means we will definitely have to follow stricter procedures, and some frustration may arise from this because some people will feel that the procedures stand in their way. That’s why a transition from loose procedures to more rigid ones needs to be carefully analyzed.

My recommended workflow using Jira:

  1. Preferably use a single project in order to have a single backlog and prioritize the project from a centralized place
  2. Use Components to organize related items (Broker Area, Employer Area, etc.). Components can have Component Leads: people who are automatically assigned issues with that component. Components add structure to a project, breaking it up into features, teams, modules, sub-projects, and more. Using components, you can generate reports, collect statistics, display them on dashboards, etc. Project components can be managed only by users who have project administrator permissions, and they must have unique names within a project. Nothing prevents users from adding an issue to more than one component.
  3. Use Epics to group related stories and track flows. Epics or complex stories may be re-organized during the backlog refinement meetings.
    Note: There is no easy way to prioritize epics themselves. To accomplish this you need to add a Kanban board and filter only epics. This can be used as a roadmap or as a ScrumBan bucket.
  4. Use Labels as the simplest way to categorize items.  Anyone can create new labels on the fly while editing an item. All project labels are displayed in the Labels tab of the project as a tag cloud. We can have labels like Production emergency, Feature requests, etc
  5. Use parallel sprints (this is an experimental feature in Jira, but our current process uses parallel sprints)
    Info: https://confluence.atlassian.com/agile/jira-agile-resources/jira-agile-faq/how-do-i-have-multiple-or-parallel-sprints-running-at-the-same-time
    Where to enable this:
  6. Use this workflow:

  7. Use this board configuration:

Widgetize your app! Reusing code needed to show blocks of content in ZF2 with Controller Plugins and child views

So you are developing a ZF2 application and you have a block of content which needs to be inserted in several places within the application. Using a forward could work, but it renders a whole page, not just the part you are interested in. Two concepts come to the rescue here: controller plugins and child views.

Here is how a controller action looks with this method:
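A minimal sketch of such an action, assuming a DashboardController and the “employer” plugin described below (the controller name and variables are assumptions, not the original listing):

    <?php
    // Application\Controller\DashboardController (name assumed)
    namespace Application\Controller;

    use Zend\Mvc\Controller\AbstractActionController;
    use Zend\View\Model\ViewModel;

    class DashboardController extends AbstractActionController
    {
        public function indexAction()
        {
            // the parent view for this action
            $viewModel = new ViewModel(array('title' => 'Dashboard'));

            // "employer" is the controller plugin; getProfile() returns a child ViewModel
            $employerInfo = $this->employer()->getProfile();

            // attach the child; its rendered output becomes $this->employerInfo in the parent view
            $viewModel->addChild($employerInfo, 'employerInfo');

            return $viewModel;
        }
    }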

It looks good, doesn’t it?
“employer” is registered as a controller plugin.
We create a parent view called $viewModel. The child view is returned by the plugin method getProfile(). It is then added as a child of $viewModel and made available as “employerInfo”.

In the view:
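Assuming the sketch above, the parent template only needs to echo the captured child output:

    <?php // view/application/dashboard/index.phtml (path assumed) ?>
    <h1><?php echo $this->escapeHtml($this->title); ?></h1>

    <div class="employer-widget">
        <?php
        // the child ViewModel attached as 'employerInfo' is rendered first and
        // its HTML is captured into this variable
        echo $this->employerInfo;
        ?>
    </div>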

And here is a trick to show variables from the child view inside the parent view:
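One way to do this (a sketch, not necessarily the exact trick used originally) is to reach the child ViewModel itself through the viewModel() view helper and read its variables:

    <?php
    // inside the parent template: find the child captured as 'employerInfo'
    // and pull a variable (here 'links') that was set on it
    $links = array();
    foreach ($this->viewModel()->getCurrent()->getChildren() as $child) {
        if ($child->captureTo() == 'employerInfo') {
            $links = $child->getVariable('links', array());
        }
    }
    ?>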

In order to accomplish all this, we need to create the controller plugin:
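Registering such a plugin is normally done in module.config.php; a sketch with an assumed class name:

    <?php
    // module/Application/config/module.config.php (excerpt)
    return array(
        'controller_plugins' => array(
            'invokables' => array(
                // makes $this->employer() available in every controller
                'employer' => 'Application\Controller\Plugin\Employer',
            ),
        ),
    );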

from inside the plugin method I make the links var available for the view:
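In essence (variable names assumed):

    <?php
    // inside the plugin method, before returning the child ViewModel
    $view->setVariable('links', $links);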

plugin code:
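A simplified sketch of such a plugin (route and template names are assumptions):

    <?php
    // Application\Controller\Plugin\Employer (a sketch, not the original class)
    namespace Application\Controller\Plugin;

    use Zend\Mvc\Controller\Plugin\AbstractPlugin;
    use Zend\View\Model\ViewModel;

    class Employer extends AbstractPlugin
    {
        /**
         * Builds the reusable "employer profile" block as a child ViewModel.
         */
        public function getProfile()
        {
            $controller = $this->getController();

            // route parameters of the calling controller are available here
            $employerId = (int) $controller->params()->fromRoute('id', 0);

            // link definitions consumed later by the pluginLink view helper
            $links = array(
                'view' => array(
                    'route'   => 'employer/profile',            // route name assumed
                    'params'  => array('action' => 'view'),
                    'options' => array('secondary_id' => '$0'), // $0 is replaced by the first value passed to pluginLink()
                ),
            );

            $view = new ViewModel(array('employerId' => $employerId));
            $view->setVariable('links', $links);

            // the template rendered for this block (path assumed)
            $view->setTemplate('application/employer/profile');

            return $view;
        }
    }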

Finally, I use the pluginLink helper to create the URLs. pluginLink takes two parameters.
The first parameter is the link configuration array, with keys like route, params and options.
The second parameter is a list of values used to replace the $ placeholders inside the link definition:
$0 is replaced by the first item, $1 by the second item, and so on.

view code:
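A sketch of how the helper could be called from the template, following the two-parameter signature described above (the variable names are assumptions):

    <?php // $links comes from the child view model; the first array element fills the $0 placeholder ?>
    <a href="<?php echo $this->pluginLink($this->links['view'], array($employer['user_id'])); ?>">
        View employer profile
    </a>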

Notice the line with ‘secondary_id’ => ‘$0’ in the $options definition on the controller side? It instructs the helper to build a URL from the ‘view’ link definition and to replace the secondary_id route param with the first array item passed in (the user id).

Here is an extended example where I pass dynamic query params:
controller code:
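A sketch of such a link definition, using the same route/params/options structure with query placeholders (all concrete names are assumptions):

    <?php
    // link definition with dynamic query parameters, set on the child view model
    $links = array(
        'report' => array(
            'route'   => 'employer/reports',
            'params'  => array('action' => 'index'),
            'options' => array(
                'query' => array(
                    'from' => '$0', // replaced with the first value passed to pluginLink()
                    'to'   => '$1', // replaced with the second value
                ),
            ),
        ),
    );
    $view->setVariable('links', $links);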

the view for this link:
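And a possible call from the template, passing the dynamic values in order ($fromDate and $toDate are assumed variables):

    <a href="<?php echo $this->pluginLink($this->links['report'], array($fromDate, $toDate)); ?>">
        Download report
    </a>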


Using controller plugins is an easy way to reuse controller-related code, and child views allow easy reuse of view blocks. Your plugin can hold logic that reads the route params and uses them to filter results or make decisions. Returning a viewModel (child view model) allows rendering its HTML as is, or using only the variables declared within it to create a totally different view.

You could use a normal service instead of a controller plugin, but controller plugins are a type of service provided by the framework specifically for controller-related logic – you get the getController() method included and they are available in all your controllers.

Simplify handling of tables, entities, forms and validations in ZF2 by using annotations

If you have developed any application using ZF2, you may have become frustrated by the tedious work of writing boilerplate code for common tasks like a simple form that gets validated and then saved to a database. The Zend manual recommends creating a table class, an entity class, a form class and a validator class, along with the usual MVC prerequisites like controller, action and view, plus the Zend config stuff for paths, etc. Coming from the convention-over-configuration world of CakePHP, this seems ridiculous.

Here is how you can speed up your workflow while still benefiting from all the enterprise features and flexibility you like in ZF2.

Annotations are special docblocks which store metadata in PHP classes. This information is available at runtime, unlike regular comment blocks, which are not. Note the difference:
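A small illustration of the difference (the class and property names are just placeholders):

    <?php
    use Zend\Form\Annotation;

    class Example
    {
        /*
         * A regular comment block: it is not a docblock, so it cannot be
         * retrieved through the Reflection API at runtime.
         */
        public $plain;

        /**
         * A docblock with annotations: ReflectionProperty::getDocComment()
         * returns it at runtime, and the annotation engine turns the
         * @-tags below into metadata objects.
         *
         * @Annotation\Required(true)
         * @Annotation\Filter({"name": "StringTrim"})
         */
        public $annotated;
    }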


There is no support for annotations in the PHP core, but there are engines built on top of the reflection API which can parse them. Common choices are the ones used by Symfony and phpDocumentor. ZF2 supports annotations through its AnnotationBuilder class and doctrine/common (the Doctrine annotation library).

You can add the required package to your project by using Composer.
Edit composer.json:
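For example, adding something along these lines to the require section (the exact version constraints depend on your setup):

    {
        "require": {
            "php": ">=5.3.3",
            "zendframework/zendframework": "2.*",
            "doctrine/common": ">=2.1"
        }
    }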

then run
php composer.phar install

I am using a TableGateway factory that returns a generic table instance, or a custom table instance if one exists. The table service takes care of the CRUD operations and hydrates the result set.
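The idea can be sketched roughly like this (class names and naming conventions are assumptions, not the original code):

    <?php
    namespace Application\Model;

    use Zend\Db\Adapter\Adapter;
    use Zend\Db\ResultSet\HydratingResultSet;
    use Zend\Db\TableGateway\TableGateway;
    use Zend\Stdlib\Hydrator\ObjectProperty;

    class TableFactory
    {
        /**
         * Returns a custom table class if one exists (e.g. Application\Model\UserTable),
         * otherwise falls back to a generic TableGateway for the given table name.
         */
        public static function create($tableName, Adapter $adapter)
        {
            $customClass = __NAMESPACE__ . '\\' . ucfirst($tableName) . 'Table';
            if (class_exists($customClass)) {
                return new $customClass($adapter);
            }

            // generic gateway that hydrates rows into a matching entity, if one exists
            $entityClass = __NAMESPACE__ . '\\' . ucfirst($tableName);
            $resultSet   = class_exists($entityClass)
                ? new HydratingResultSet(new ObjectProperty(), new $entityClass())
                : null;

            return new TableGateway($tableName, $adapter, null, $resultSet);
        }
    }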

We start with a base entity:
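A stripped-down sketch of such an entity, with form annotations on its properties (the entity and its fields are illustrative only):

    <?php
    namespace Application\Model;

    use Zend\Form\Annotation;

    /**
     * @Annotation\Name("user")
     * @Annotation\Hydrator("Zend\Stdlib\Hydrator\ObjectProperty")
     */
    class User
    {
        /**
         * @Annotation\Exclude()
         */
        public $id;

        /**
         * @Annotation\Filter({"name": "StringTrim"})
         * @Annotation\Validator({"name": "StringLength", "options": {"min": 3, "max": 64}})
         * @Annotation\Options({"label": "Full name"})
         */
        public $name;

        /**
         * @Annotation\Required(true)
         * @Annotation\Validator({"name": "EmailAddress"})
         * @Annotation\Options({"label": "E-mail"})
         */
        public $email;
    }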

Here is a basic method to populate the entities with data:
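A possible implementation, placed in the base entity (property handling simplified):

    <?php
    // part of the base entity class: copy matching keys/properties onto the entity
    public function populate($data = array())
    {
        if (is_object($data)) {
            // objects are reduced to their public properties first
            $data = get_object_vars($data);
        }

        foreach ((array) $data as $property => $value) {
            if (property_exists($this, $property)) {
                $this->$property = $value;
            }
        }

        return $this;
    }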

You can hydrate by providing either an object or an array.

Then you can validate your entity like this:
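In outline, using the AnnotationBuilder that ships with ZF2 (the surrounding controller code is assumed):

    <?php
    use Zend\Form\Annotation\AnnotationBuilder;

    // build a form (elements, filters, validators) straight from the entity's annotations
    $builder = new AnnotationBuilder();
    $form    = $builder->createForm($entity);

    $form->bind($entity);                    // valid, filtered values end up back in the entity
    $form->setData($this->getRequest()->getPost());

    if ($form->isValid()) {
        // $entity now holds the filtered data and can be handed to the table service
    } else {
        $errors = $form->getMessages();
    }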

A very nice feature is to automatically hydrate the composed objects as well when using queries with joins. This can be done automatically if you attach this custom hydrator to the result set prototype and prepare the query for this behavior.
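Attaching a hydrator as the result set prototype looks roughly like this (CustomHydrator is a placeholder name for the custom hydrator mentioned above):

    <?php
    use Zend\Db\ResultSet\HydratingResultSet;
    use Zend\Db\TableGateway\TableGateway;

    // every row returned by the gateway is hydrated into the entity
    // (and, with a suitable hydrator, into its composed objects)
    $resultSet = new HydratingResultSet(new CustomHydrator(), new User());
    $table     = new TableGateway('users', $adapter, null, $resultSet);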

Any composed objects will be populated. Other joined columns will populate a special property called VF (from “virtual fields”). You can retrieve virtual fields later by using getVF($name = null).

For example, if we join ContactDetail, the properties from ContactDetail are populated as well, and if we also have an aggregate expression like COUNT(*), you will find that value in the virtual fields. The purpose of virtual fields is to store any data outside the scope of the entity.


Here is the list of annotations supported by Zend\Form\Annotation:

  • ComposedObject: specify another object with annotations to parse. Typically, this is used if a property
    references another object, which will then be added to your form as an additional fieldset. Expects a string
    value indicating the class of the object being composed, @ComposedObject("Namespace\Model\ComposedObject"),
    or an array to compose a collection:
    @ComposedObject({"target_object":"Namespace\Model\ComposedCollection", "is_collection":"true", "options":{"count":2}})

    target_object is the element to compose, is_collection flags this as a collection and options can take an array
    of options to pass into the collection.
  • ErrorMessage: specify the error message to return for an element in the case of a failed validation. Expects a
    string value.
  • Exclude: mark a property to exclude from the form or fieldset. This annotation does not require a value.
  • Filter: provide a specification for a filter to use on a given element. Expects an associative array of values,
    with a “name” key pointing to a string filter name, and an “options” key pointing to an associative array of
    filter options for the constructor: @Filter({"name": "Boolean", "options": {"casting":true}}). This annotation
    may be specified multiple times.
  • Flags: flags to pass to the fieldset or form composing an element or fieldset; these are usually used to
    specify the name or priority. The annotation expects an associative array: @Flags({"priority": 100}).
  • Hydrator: specify the hydrator class to use for this given form or fieldset. A string value is expected.
  • InputFilter: specify the input filter class to use for this given form or fieldset. A string value is expected.
  • Input: specify the input class to use for this given element. A string value is expected.
  • Instance: specify an object class instance to bind to the form or fieldset.
  • Name: specify the name of the current element, fieldset, or form. A string value is expected.
  • Object: specify an object class instance to bind to the form or fieldset.
    (Note: this is deprecated in 2.4.0; use Instance instead.)
  • Options: options to pass to the fieldset or form that are used to inform behavior – things that are not
    attributes; e.g. labels, CAPTCHA adapters, etc. The annotation expects an associative array: @Options({"label":
    "Username:"}).
  • Required: indicate whether an element is required. A boolean value is expected. By default, all elements are
    required, so this annotation is mainly present to allow disabling a requirement.
  • Type: indicate the class to use for the current element, fieldset, or form. A string value is expected.
  • Validator: provide a specification for a validator to use on a given element. Expects an associative array of
    values, with a “name” key pointing to a string validator name, and an “options” key pointing to an associative
    array of validator options for the constructor: @Validator({"name": "StringLength", "options": {"min":3, "max":
    25}}). This annotation may be specified multiple times.

Unique record validation in ZF2 forms

Controller code:
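A sketch of the controller side: build the form and attach a uniqueness validator that knows the DB adapter and, when editing, the id of the current record to exclude (form, field and class names are assumptions; the validator class is sketched below):

    <?php
    // inside the controller action
    $form = $this->getServiceLocator()->get('FormElementManager')->get('Application\Form\UserForm');
    $form->setData($this->getRequest()->getPost());

    // tell the unique-email validator which record to ignore when editing
    $form->getInputFilter()->get('email')->getValidatorChain()->attach(
        new \Application\Validator\UniqueEmail(
            $this->getServiceLocator()->get('Zend\Db\Adapter\Adapter'),
            $id // null / 0 when creating a new record
        )
    );

    if ($this->getRequest()->isPost() && $form->isValid()) {
        // save the record
    }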

Validator class:
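A possible validator implementation (a sketch; Zend\Validator\Db\NoRecordExists can cover the simple case out of the box, and the table and column names here are assumptions):

    <?php
    namespace Application\Validator;

    use Zend\Db\Adapter\Adapter;
    use Zend\Db\Sql\Sql;
    use Zend\Validator\AbstractValidator;

    class UniqueEmail extends AbstractValidator
    {
        const NOT_UNIQUE = 'notUnique';

        protected $messageTemplates = array(
            self::NOT_UNIQUE => 'This e-mail address is already registered',
        );

        protected $adapter;
        protected $excludeId;

        public function __construct(Adapter $adapter, $excludeId = null, $options = null)
        {
            $this->adapter   = $adapter;
            $this->excludeId = $excludeId;
            parent::__construct($options);
        }

        public function isValid($value)
        {
            $this->setValue($value);

            $sql    = new Sql($this->adapter);
            $select = $sql->select('users')
                          ->columns(array('id'))
                          ->where(array('email' => $value));

            if ($this->excludeId) {
                // when editing, the current record is allowed to keep its e-mail
                $select->where->notEqualTo('id', $this->excludeId);
            }

            $result = $sql->prepareStatementForSqlObject($select)->execute();
            if ($result->count() > 0) {
                $this->error(self::NOT_UNIQUE);
                return false;
            }

            return true;
        }
    }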

Configure Sphinx Search server with a main + delta indexing scheme, including updates & deletes

Sphinx Search is an open-source full-text search server developed in C++. It is a very fast and scalable solution, superior to what database servers offer. It works on all major operating systems, but in this example I will show you how to install and configure it on Linux, which is the most common choice.

The datasource will be a MySQL database.


Installing is simple. You can download the sources and use the standard procedure (configure and make). If you are using CentOS, you can download the latest RPM and install it like this:

rpm -ihv <the-URL-of-RPM-from-sphinx-website>

CentOS usually ships an old version in the official yum repo, so downloading the latest version is recommended, because new features are added all the time.

If it complains about missing libraries, like ODBC, use yum to locate and install them.


If you used rpm to install, the configuration file is located at /etc/sphinx/sphinx.conf

Sample config:
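A sketch of a main + delta configuration built around the ads table and the two helper tables described below (credentials, paths and column names are assumptions):

    source ads_main
    {
        type     = mysql
        sql_host = localhost
        sql_user = sphinx
        sql_pass = secret
        sql_db   = mydb

        sql_query_pre = SET NAMES utf8
        # remember the highest indexed id and the time of this full reindex
        sql_query_pre = REPLACE INTO sphinx_counter SELECT 1, MAX(id), NOW() FROM ads

        sql_query = \
            SELECT id, title, description, UNIX_TIMESTAMP(updated_at) AS updated_at \
            FROM ads \
            WHERE id <= (SELECT max_id FROM sphinx_counter WHERE counter_id = 1)

        sql_attr_timestamp = updated_at
    }

    source ads_delta : ads_main
    {
        sql_query_pre = SET NAMES utf8

        # rows added or updated since the last full reindex
        sql_query = \
            SELECT id, title, description, UNIX_TIMESTAMP(updated_at) AS updated_at \
            FROM ads \
            WHERE id > (SELECT max_id FROM sphinx_counter WHERE counter_id = 1) \
               OR updated_at > (SELECT last_index FROM sphinx_counter WHERE counter_id = 1)

        # rows deleted or updated since the last full reindex are suppressed
        # from ads_main when both indexes are searched together
        sql_query_killlist = \
            SELECT id FROM sphinx_ads_deleted \
            UNION \
            SELECT id FROM ads WHERE updated_at > (SELECT last_index FROM sphinx_counter WHERE counter_id = 1)
    }

    index ads_main
    {
        source = ads_main
        path   = /var/lib/sphinx/ads_main
    }

    index ads_delta : ads_main
    {
        source = ads_delta
        path   = /var/lib/sphinx/ads_delta
    }

    searchd
    {
        listen    = 9312
        log       = /var/log/sphinx/searchd.log
        query_log = /var/log/sphinx/query.log
        pid_file  = /var/run/sphinx/searchd.pid
    }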


As you can see I used a table named ads.

You need to create two tables for Sphinx (a possible structure is sketched after this list):

  1. sphinx_ads_deleted – will contain the items deleted from ads; they are inserted by the DELETE trigger on ads
  2. sphinx_counter – will contain the last indexed id and the modification date of the last reindex
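A possible structure for the two helper tables (column names are assumptions, kept consistent with the config sketch above):

    CREATE TABLE sphinx_ads_deleted (
        id         INT UNSIGNED NOT NULL PRIMARY KEY,  -- id of the deleted ad
        deleted_at DATETIME NOT NULL
    ) ENGINE=InnoDB;

    CREATE TABLE sphinx_counter (
        counter_id INT UNSIGNED NOT NULL PRIMARY KEY,  -- always 1 for the ads index
        max_id     INT UNSIGNED NOT NULL,              -- highest id covered by the main index
        last_index DATETIME     NOT NULL               -- time of the last full reindex
    ) ENGINE=InnoDB;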

You need to define the DELETE trigger shown below on the ads table.
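Something along these lines (trigger and column names are assumptions):

    DELIMITER //
    CREATE TRIGGER ads_after_delete
    AFTER DELETE ON ads
    FOR EACH ROW
    BEGIN
        -- remember the id so the delta's kill-list hides it until the next full reindex
        REPLACE INTO sphinx_ads_deleted (id, deleted_at) VALUES (OLD.id, NOW());
    END//
    DELIMITER ;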

I will also include the structure of my ads table.
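A minimal, hypothetical version matching the config sketch above (the real table will have more columns):

    CREATE TABLE ads (
        id          INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        title       VARCHAR(255) NOT NULL,
        description TEXT         NOT NULL,
        updated_at  TIMESTAMP    NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;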

Useful commands

start/stop/restart service:
service searchd restart
indexer --rotate ads_main

--rotate will update the index named ads_main even if it is in use

Cron jobs (update schedule)

Usually the main index is rebuilt once a day, and the delta updates more frequently.

Make sure the crond service is running with:
service crond status
It should say the service is running.

Create a file for each job in /etc/cron.d:

sphinx_main – runs at 2:12 AM each night
sphinx_delta – runs every minute
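Possible contents for the two files (the indexer path and the ads_delta index name are assumptions; /etc/cron.d entries need the user field):

    # /etc/cron.d/sphinx_main – full rebuild of the main index at 2:12 AM
    12 2 * * * root /usr/bin/indexer --rotate ads_main

    # /etc/cron.d/sphinx_delta – refresh the delta index every minute
    * * * * * root /usr/bin/indexer --rotate ads_delta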

Faster updates?

1. Use merge

Instead of reindexing main, you could merge delta into main. This still consumes a lot of memory, but it would be faster.

The basic command syntax is as follows:
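From the Sphinx documentation, the merge form of the indexer command is:

    indexer --merge DSTINDEX SRCINDEX [--rotate]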

So you will have something like this:
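With the index names used above:

    indexer --merge ads_main ads_delta --rotate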

The problem is that you can’t do this from the shell alone, because you also need to update the sphinx_counter table, and that is why you will need to do it from a script.

I prefer to rebuild the index each night to make sure I am using a synchronized version of the database.
A full reindex of a 100K-record table takes only a few seconds.

2. Use Real Time indexes for live updates

Real Time indexes were introduced in version 1.10-beta. Updates to an RT index can appear in search results within 1-2 milliseconds, i.e. 0.001-0.002 seconds. However, RT indexes are less efficient for bulk indexing huge amounts of data.

HTML Cleaner

How many of you have needed to clean up those messy MS Word files in order to integrate them into valid W3C pages, or just fit them into the overall design?
I’ve looked for a good HTML cleaner and didn’t find a free one that worked well.

Meanwhile, I developed my own HTML Cleaner class in PHP, because at the time I needed to clean up tons of Word-generated code.

I’ve combined the strong HTML Tidy library with my own regular-expression-based cleaning algorithms. I wanted a simple way to strip all unnecessary tags and styles yet keep the output W3C standard compliant.

Syntax checking is done only when using Tidy.
Note that this tool is designed to strip/clean useless tags and attributes back to HTML basics and optimize the code, not to sanitize it (like HTMLPurifier does).

Without the tidy PHP extension, the class can:
– remove styles, attributes
– strip useless tags
– fill empty table cells with non-breaking spaces
– optimize code (merge inline tags, strip empty inline tags, trim excess new lines)
– drop empty paragraphs
– compress (trim space and new-line breaks).

In conjunction with tidy, the class can apply all tidy actions (clean-up, fix errors, convert to XHTML, etc) and then optionally perform all actions of the class (remove styles, compress, etc).

Currently the following cleaning method is implemented: tag whitelist/attribute blacklist

See it in action:
Download latest version

Licenced under Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported (http://creativecommons.org/licenses/by-nc-sa/3.0/)
for personal, non-commercial use

For commercial use, one developer licence costs 15 EUR.

Changelog (taken from RC6):

v. 1.0 RC6
-added option to apply tidy before internal cleanup
-added TidyClean() function that cleans the HTML source using only Tidy, modifying it in place
-changed license to Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported

v. 1.0 RC5
-Tidy cleanup now works with PHP 4.3 as well. Correction: the class is compatible with PHP >= 4.3; PHP 5 recommended. Basic cleanup (no Tidy) can work with earlier versions of PHP 4
-removed drop-empty-paras option from default tidy config since there is already an internal drop-empty-paras mechanism
-Optimize now defaults to true since it is very useful
-new default tidy config options:
'preserve-entities' => true, // preserve the well-formed entities found in the input (to display some characters correctly)
'quote-ampersand' => true, // output unadorned & characters as &amp; (as required by W3C)
-default Encoding set to latin1

v. 1.0 RC4
-the class is now compatible with PHP 4.4 or higher (maybe 4.0, but never tested)
-minor bugfix for Optimize (loop until optimized now works correctly)

v. 1.0 RC3
-cleaning is now done case-insensitively
-improved optimize, removed EXPERIMENTAL tag
-default tidy config now sets word-2000 to false