Test Driven Development

I've been away for a while and forgotten a scary amount of Java in the meantime, which goes to show that you really do need to hack code pretty much daily to really keep up with a language.

Anyway, while I was away I was thinking about Test Driven Development. The theory is simple: Write some tests and develop code that passes those tests.
So I got to thinking, surely the obvious next step from this is: Write some tests and press a button so that the computer generates the necessary code that passes the tests.

Now, anyone that's met me knows I have a brilliant knack for inventing things that others have already invented years ago.
So my questions are:
a) Has work already been done in this area?
b) Is this something IDEA could do?

I can already think of numerous pros and cons arising from such a development methodology but I'd be interested to hear what others think.

Vince.


Vincent O'Sullivan wrote:

Write some tests and press a button so that the computer generates the necessary code that passes the tests.

Now, anyone that's met me knows I have a brilliant knack for inventing things that others have already invented years ago.
So my questions are:
a) Has work already been done in this area?
b) Is this something IDEA could do?


It sounds to me like Declarative Programming and more vaguely like MDA.
Both of them are vaporware, IMHO.

You might run a search on Google to see how they are doing.


There's been a fair amount of research over the years on automatically
generating programs from specifications. AFAICT there are two different
approaches people have taken. The first uses some kind of formal
specification language like Z; the recurring problem with these is that it
generally turns out to be harder to write the specification than the program
(and don't even think about maintenance...). The second tries to use some
less formal type of specification; the problem here is that there is not
enough information in the spec to generate the application.

This idea seems to fall into the second camp. The problem is that a test or
set of tests is not necessarily a complete specification of the problem.
Consider this, for example:

public void testFunc() {
    assertEquals(0, func(0));
    assertEquals(1, func(1));
    assertEquals(-1, func(-1));
}

Now this could lead to a function which looked like this:

public int func(int a) {
    return a;
}

or this:

public int func(int a) {
    return (a * a * a);
}

or this:

public int func(int a) {
    if (a == 0) {
        return 0;
    } else if (a == 1) {
        return 1;
    } else {
        return -1;
    }
}

Obviously, none of these functions are equivalent. You could argue that the
test was poor because it was very incomplete, but that's the point. There
are only two ways I can think of to have a complete enough test to reliably
generate a function from:
- specify every single input and output in the test; or
- implement the function as part of the test.
Either way is going to be more work than just writing the function
yourself (a LOT more work for the former) AND you've given up one of the big
advantages of unit tests: simplicity.

Just my 2p...

Vil.


Vincent O'Sullivan wrote:

I've been away for a while and forgotten a scary amount of Java in the meantime, which goes to show that you really do need to hack code pretty much daily to really keep up with a language.

Anyway, while I was away I was thinking about Test Driven Development. The theory is simple: Write some tests and develop code that passes those tests.
So I got to thinking, surely the obvious next step from this is: Write some tests and press a button so that the computer generates the necessary code that passes the tests.

Now, anyone that's met me knows I have a brilliant knack for inventing things that others have already invented years ago.
So my questions are:
a) Has work already been done in this area?
b) Is this something IDEA could do?

I can already think of numerous pros and cons arising from such a development methodology but I'd be interested to hear what others think.

Vince.


--
Vilya Harvey
vilya.harvey@digitalsteps.com / digital steps /
(W) +44 (0)1483 469 480
(M) +44 (0)7816 678 457 http://www.digitalsteps.com/



Vil,

That was a very interesting insight into the matter. And I completely agree with the conclusion.

Vlad


There's been a fair amount of research over the years...
There are only two ways I can think of to have a complete enough test to reliably generate a function from:
- specify every single input and output in the test; or
- implement the function as part of the test.
Either way is going to be more work than just writing the function yourself (a LOT more work for the former) AND you've given up one of the big advantages of unit tests: simplicity.


mmm. Good answer. Even a cursory search of the Internet throws up the general consensus that the solution is hairier than the problem. I guess it can be left for the Tefal-heads to sort out.

Vince.


This discussion made me think of something which would be very cool.

I think generating the actual code logic would be impossible to do. And IMHO extremely dangerous. The reason for having tests in the first place is so that the developer feels more confident about his code. Doing test-first and then having the tool generate your code will give you code that will pass all tests, so you'll feel "confident" that it's working right when it might not be, because the generation actually somehow misunderstood something. So what would essentially be accomplished by this would be moving the source of the problem (the actual problem solving... thinking about the logic of the program) from one step (the programming) to another (the test writing). So again: why do we have tests in the first place? To give us confidence in our own logical thinking! So what will we eventually need if we move our logical thinking to the part of writing tests? We'll need something else to give us confidence in our logical thinking (a test for the test).

But anyways... this gave me an idea for the next release of IDEA: Some sort of a wizard that will generate the skeleton of your code from the tests. In a way this has already been done because if you write a test and have something like:

Foo foo = new Foo();
foo.bar();

and you haven't created the Foo class, IDEA will eventually ask you if you want to create it, then if you want to add the method bar() to it and so on.

What I would like to see would just be an extension to that where IDEA would generate the skeletons for classes and methods but would do it in a "one step" approach. It's a bit tedious to always click that lightbulb and select "Create ...". Ideally IDEA should also allow you to write those tests in a way which the editor wouldn't always nag you about the classes and methods that don't exist... sort of a "Test-First edit mode". At the end of writing the tests you would select "Generate code skeleton from tests" and it would create these things without asking you all the time about each change.
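For the hypothetical Foo/bar() example above, a "Generate code skeleton from tests" action might emit no more than something like this (the shape of the generated stub is my assumption, not an existing IDEA feature):

// Generated skeleton: compiles, but deliberately left for the developer.
public class Foo {
    public void bar() {
        // TODO: implement so that the tests pass
    }
}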


Stefan, I like the thinking you make there up north.
I love the "Test writing" mode (or client writing in general).

But what really clicked for me was the view that "testing is to make
sure developer did things right". I'd like to extend that to the
very opposite extreme of the original request:

- generate the methods so that the default implementation DOESN'T
return the expected values and the developer cannot forget
to look at them and think about the right implementation.
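A sketch of what such a deliberately failing skeleton could look like, reusing the func() example from earlier in the thread (throwing is one way to guarantee the test cannot pass by accident):

// Generated stub that can never satisfy a test by accident.
public int func(int a) {
    throw new UnsupportedOperationException("func is not implemented yet");
}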

At the same time I must say this is just a mental exercise for me
so don't take me too seriously here. I don't believe in "smart code
generation" - that's where this discussion leads to. I like, however,
"dumb code generation" - easy things that can be easily determined
automatically and are annoying to type manually (e.g. method entry/exit
logging, permission checks, etc.).

r.

Stefan Freyr Stefansson wrote:

This discussion made me think of something which would be very cool.

I think generating the actual code logic would be impossible to do. And IMHO
extremely dangerous. The reason for having tests in the first place is so
that the developer feels more confident about his code. Doing test first and
then having the tool generate your code will give you code that _will pass
all tests_ so you'll feel "confident" that it's working right when it might
not be because the generation actually somehow misunderstood something. So
what would essentially be accomplished by this would be moving the source of
the problem (the actual problem solving... thinking about the logic of the
program) from one step (the programming) to another (the test writing). So
again: Why do we have tests in the first place? To give us confidence in our
own logical thinking! So what will we eventually need if we move our logical
thinking to the part of writing tests? We'll need something else to give us
confidence in our logical thinking (a test for the test).

But anyways... this gave me an idea for the next release of IDEA: Some sort
of a wizard that will generate the skeleton of your code from the tests. In
a way this has already been done because if you write a test and have
something like:

Foo foo = new Foo();
foo.bar();

and
you haven't created the Foo class, IDEA will eventually ask you if you want
to create it, then if you want to add the method bar() to it and so on.

What I would like to see would just be an extension to that where IDEA would
generate the skeletons for classes and methods but would do it in a "one
step" approach. It's a bit tedious to always click that lightbulb and select
"Create ...". Ideally IDEA should also allow you to write those tests in a
way which the editor wouldn't always nag you about the classes and methods
that don't exist... sort of a "Test-First edit mode". At the end of writing
the tests you would select "Generate code skeleton from tests" and it would
create these things without asking you all the time about each change.



What I would like more is a background run of the tests for the given
method, displaying an indicator in the status bar - a green checkmark for
"everything is OK" and a red exclamation mark with a failure message which
appears as a tooltip.

Note that some test methods need a pretty long time to run, and could take
a lot of resources, so I envision the functionality like this:

1. set up an AutoTestTarget for the given method, similar to
the run targets. It should include the name of the method
and a bunch of JUnit test methods.

2. after IDEA finishes with syntax/inspection checking, it
should try to compile a current snapshot of the source and run
the tests in a very low-priority thread.

3. display the assertion/exception in the status bar with an
explanation tooltip and navigate to the code on double-click.
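A rough sketch of what steps 2 and 3 might boil down to, assuming a JUnit 4-style JUnitCore runner (AutoTestRunner and the status-bar hook are invented for illustration):

import org.junit.runner.JUnitCore;
import org.junit.runner.Result;

class AutoTestRunner {
    // Runs the JUnit tests for one AutoTestTarget at minimum priority,
    // so the IDE stays responsive while the tests execute.
    void runInBackground(final Class<?> testClass) {
        Thread worker = new Thread(new Runnable() {
            public void run() {
                Result result = JUnitCore.runClasses(testClass);
                // Step 3 would feed result.getFailures() into the status
                // bar tooltip instead of printing it.
                System.out.println(result.wasSuccessful()
                        ? "tests OK" : result.getFailures().toString());
            }
        });
        worker.setPriority(Thread.MIN_PRIORITY); // "very low-priority thread"
        worker.setDaemon(true);
        worker.start();
    }
}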


what do you think about this?

-- dimiter


Stefan Freyr Stefansson wrote:

This discussion made me think of something which would be very cool.

I think generating the actual code logic would be impossible to do. And IMHO
extremely dangerous. The reason for having tests in the first place is so
that the developer feels more confident about his code. Doing test first and
then having the tool generate your code will give you code that _will pass
all tests_ so you'll feel "confident" that it's working right when it might
not be because the generation actually somehow misunderstood something. So
what would essentially be accomplished by this would be moving the source of
the problem (the actual problem solving... thinking about the logic of the
program) from one step (the programming) to another (the test writing). So
again: Why do we have tests in the first place? To give us confidence in our
own logical thinking! So what will we eventually need if we move our logical
thinking to the part of writing tests? We'll need something else to give us
confidence in our logical thinking (a test for the test).


I think you are missing the spirit and intent of TDD. The best description of
it and its practical playout that I have seen is a series done in Software
Development magazine (http://www.sdmagazine.com). It's a really good read.

The basic practices are:

  • There should never be a change to code until a test case exists that mandates its change.

  • You should try to make the test fail before you try to make it pass. (Generating code that passes is more dangerous than generating test cases due to the false sense of security you get.)

  • If the test case won't fail, your code already supports the new feature and nothing needs to be done.

  • Refactor mercilessly. As time goes by, you can develop some cruft--you need to clean up your code while still passing all the test cases. That will help you guarantee that your refactoring did not break anything.

  • Always do the simplest thing to make the test pass. Over time, what was simple at the beginning is not simple now, see the last point.

As such, both the process of writing the test case and altering/writing the
code are essential to TDD.
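As a toy illustration of the first two practices (the names are invented for the example), the test is written first and fails, then the simplest code makes it pass:

public class AdderTest extends junit.framework.TestCase {
    // Red: written first; fails until Adder.add() exists and is correct.
    public void testAdd() {
        assertEquals(5, new Adder().add(2, 3));
    }
}

// Green: the simplest thing that makes the test pass.
class Adder {
    int add(int a, int b) {
        return a + b;
    }
}

A strict reading of "simplest" could even return the constant 5 until a second test case forces the general version.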

Now, the level of support that is useful from IDEA already exists: programming
by intention. You write the test case, knowing that a particular method does
not exist, or does not have the proper signature. We can write the test
according to how we want to use the code, and IDEA will suggest the steps
that will be needed to make it compile. That is enough.


dimiter wrote:

What I would like more is a background run of the tests for the given
method, displaying an indicator in the status bar - a green checkmark for
"everything is OK" and a red exclamation mark with a failure message which
appears as a tooltip.

Note that some test methods need a pretty long time to run, and could take
a lot of resources, so I envision the functionality like this:

1. set up an AutoTestTarget for the given method, similar to
the run targets. It should include the name of the method
and a bunch of JUnit test methods.

2. after IDEA finishes with syntax/inspection checking, it
should try to compile a current snapshot of the source and run
the tests in a very low-priority thread.

3. display the assertion/exception in the status bar with an
explanation tooltip and navigate to the code on double-click.


what do you think about this?


Hmm. So would IDEA flag all the tests that need to be run because of
the code change? For example, in some cases a method will be called
in 20 tests just because it is some necessary step to set up for the
real test. As such, it is really important for that to pass in all
cases. If IDEA compiled the code and reran all affected tests, then
we would have the assurance that our change did not accidentally break
something later on in the chain. It would also give us an idea of how
fragile the code is or not--signifying what would need to be made more
robust.

The important thing to realize here is what you brought up: tests take
a long time to run. If they are run in the background, then we need
the icon on the side to show the following statuses:

- No icon: no tests for the method
- Yellow icon: tests are running
- Red icon: last run of tests failed, double-clicking will open the test
failures in the doc below.
- Green icon: last run of tests passed.
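Those four states map naturally onto a tiny enum (the names are made up for illustration):

// Hypothetical status values for the per-method test indicator.
enum TestIndicatorStatus {
    NONE,     // no tests for the method
    RUNNING,  // tests are running (yellow)
    FAILED,   // last run failed (red); double-click opens the failures
    PASSED    // last run passed (green)
}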


I think you are missing the spirit and intent of TDD.


(Naturally) I don't agree with you ;o)

The basic practices are:

  • There should never be a change to code until a test case exists that mandates its change.


Agreed.

  • You should try to make the test fail before you try to make it pass. (Generating code that passes is more dangerous than generating test cases due to the false sense of security you get.)


Emm... This wouldn't be exactly true... You shouldn't try to make it fail... the basic idea is that if you write the test first then it will fail the first time because you haven't written anything to implement what it is that the test case is testing. When you start to write your method that will eventually make the test pass, of course you'll do your best to make it do things right!
"generating code that passes is more dangerous us than generating test cases due to the false sense of security you get"
I wasn't talking about generating code that passes... I was talking about generating that tedious stuff that bores the life out of us... such as classes and method declarations.

  • If the test case won't fail, your code *already supports the new feature* and nothing needs to be done.


Agreed.

  • Refactor mercilessly. As time goes by, you can develop some cruft--you need to clean up your code while still passing all the test cases. That will help you guarantee that your refactoring did not break anything.


Agreed.

  • Always do the simplest thing to make the test pass. Over time, what was simple at the beginning is not simple now, see the last point.


Why??? Why can't what was simple still be simple??? That's the beauty of TDD... it makes you concentrate on one problem at a time. This will in turn usually make developers come up with the simplest solution to the problem... and being simple, it's easier for the developer to see any potential problem with the code and fix it (of course, in a simple way).

As such, both the process of writing the test case and altering/writing the code are essential to TDD.


I agree... my suggestion wasn't to have IDEA write the code... on the contrary... I was actually saying that the developer must write the code... both the tests and the implementation to make the tests pass. What I suggested was that IDEA made it easier to do so by writing the "framework" around the logic that you are going to be cooking up. I mean, IDEA is fully capable of producing the signatures of methods, generating Java source files and putting them in the right package and so forth... tasks that would simply be boring for the developer to do.

Now, the level of support that is useful from IDEA already exists: programming by intention. You write the test case, knowing that a particular method does not exist, or does not have the proper signature. We can write the test according to how we want to use the code, and IDEA will suggest the steps that will be needed to make it compile. That is enough.


That was what I said!!! The only thing I suggested that IDEA would do in addition was provide a way to do this more expeditiously by not requiring the user to have to click that blessed lightbulb thing each and every time that it thinks it might be able to help you. That, and provide a way to write the tests without it constantly nagging that this and that class don't exist.

I think you must have misunderstood something in my post because I see nothing that contradicts your opinions!?

Kind regards, Stefan Freyr.


Stefan Freyr Stefansson wrote:

]]>

>> * Always do the simplest thing to make the test pass. Over time, what was
>> simple at the beginning is not simple now, see the he last point.


Why??? Why can't what was simple still be simple??? That's the beauty of
TDD... it makes you concentrate on one problem at a time. This will in turn
usually make developers come up with the simplest solution to the problem...
and being simple, it's easier for the developer to see any potential problem
with the code and fix it (of course, in a simple way).


A simple case in point for your question:

We start out with a simple if/else block because that was the simplest
thing at the time. The options for this section of code were limited. At
a later time, we might have some more options available, so a switch statement
would be the simplest to understand and simplest to maintain. As time goes
by, even the switch statement can become unwieldy, and we find that using the
full State pattern (i.e. state objects that take advantage of polymorphism for
an action) becomes the best option.

The term "simple" is relative, just like everything else in the software
development world. WHen you factor code maintenance and understandability,
some solutions work well at a certain scale, but not so well as the scale
increases. With the progression of scale of the FSM above, we start with
the if/else block because there is only two states. Simple to read, simple
understand, simple to maintain. As soon as you start with if/else if/else
you lose understandability and maintenance quickly becomes more troublesome.
The if/else block is no longer the simplest thing because the requirements
have changed.
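A sketch of that progression for a hypothetical two-state connection (all of the names are invented):

// Stage 1: two states -- the conditional really is the simplest thing.
String describe(boolean open) {
    return open ? "open" : "closed";
}

// Stage 3: several states, each with its own behavior -- polymorphic
// state objects are now the simpler thing to read and maintain.
interface ConnectionState {
    String describe();
    ConnectionState next();
}

class Open implements ConnectionState {
    public String describe() { return "open"; }
    public ConnectionState next() { return new Closed(); }
}

class Closed implements ConnectionState {
    public String describe() { return "closed"; }
    public ConnectionState next() { return new Open(); }
}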

If all requirements were static, TDD would be unnecessary, and full blown
CMM with heavy processes would win out. Unfortunately we live in an ever
changing world.

On top of that ever-changing world, we learn new things, or new language
features become available that simplify things. It is very likely that you
learned a few things after five years of development, and have learned better
ways of doing things that you didn't know back then. The new ways are truly
easier to understand and manage. It is also very likely that new language
or API features will be able to simplify code you wrote five years ago, or
one year ago, or even last month. Again the features either hide complexity
that you don't need to see, or they enable a new way of doing something
that is completely different.

Getting back to your comment about concentrating on one problem at a time,
I agree with you in principle. However, if you never stop to get a big
picture overview, you will find that you will never learn from your mistakes.
I believe in a three phase approach to developing a new feature/requirement:
test/develop/cleanup. As you add new features, you will inevitably run into
some dead-end solutions (unless you are omniscient, I haven't been able to
achieve this yet though). If you are not careful, you will leave remnants of
your dead-end solutions in the code. Also, the cleanup phase typically puts
you into the "big picture" mode, so you can learn from what worked and what
didn't.

But now I am dribbling on...


Stefan Freyr Stefansson wrote:

>

  • Always do the simplest thing to make the test pass.

Over time, what was


This prerequisite has always worried me. I know, from experience, that the simplest thing to pass the initial tests is frequently something that is not robust or scalable. I often have tests (and I admit it may be my tests that are at fault) where the 'simplest' way to code something that passes the test is simply to expose a class variable. Of course this is 'evil' so instead I code an accessor in the usual Java way. (Mind you, these have now been declared evil, too. See "Why getters and setters are evil" http://www.javaworld.com/javaworld/jw-09-2003/jw-0905-toolbox.html).
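For instance (a made-up illustration of the two options):

// The 'simplest' thing that passes the test: expose the field.
class CounterA {
    public int count;
}

// Barely more work, and far less simplistic: an accessor.
class CounterB {
    private int count;
    public int getCount() { return count; }
}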

So my variation is to add a rider to this condition:
Always do the simplest thing to make the test pass, but remember 'simplest' != 'most simplistic'.

Vince


Vincent O'Sullivan wrote:
>>Stefan Freyr Stefansson wrote:
>>
>>* Always do the simplest thing to make the test pass.
>> Over time, what was


This prerequisite has always worried me. I know, from experience, that the simplest thing to pass the initial tests is frequently something that is not robust or scalable. I often have tests (and I admit it may be my tests that are at fault) where the 'simplest' way to code something that passes the test is simply to expose a class variable. Of course this is 'evil' so instead I code an accessor in the usual Java way. (Mind you, these have now been declared evil, too. See "Why getters and setters are evil" http://www.javaworld.com/javaworld/jw-09-2003/jw-0905-toolbox.html).

So my variation is to add a rider to this condition:
Always do the simplest thing to make the test pass, but remember 'simplest' != 'most simplistic'.


Right. Hopefully my response to Stefan Stefansson clarifies what that is really
talking about. Really, the whole principle is designed to get people out of the
habit of over-engineering something. The basic understanding is that just because
a new requirement may come down the pike does not necessarily mean that it
will. Furthermore, we are most efficient when we only have to worry
about what is needed right now.

Nine times out of ten, you don't have enough information to properly anticipate
the implications of the future requirement--so don't worry about it until it is
time.


Vincent O'Sullivan wrote:
> This prerequisite has always has always worried me. I know, from
> experience, that the simplest thing to pass the initial
> tests is frequently something that is not robust or scaleable.

Why should it be robust, or scalable, if no test requires it?

Alain


Why should it be robust, or scalable, if no test requires it?


In many cases the input is an infinite (or a very large) set, so testing
is done on some standard cases and border cases of the input/output set.

There are cases in which testing can't cover everything, for example
string.concat(string). You can only prove that it works for
null/empty/single-character/some other example strings.
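A border-case test along those lines (JUnit 3 style) might look like:

public class ConcatTest extends junit.framework.TestCase {
    public void testConcat() {
        assertEquals("", "".concat(""));     // empty + empty
        assertEquals("a", "".concat("a"));   // empty + single character
        assertEquals("ab", "a".concat("b")); // two example strings
        try {
            "a".concat(null);                // null argument
            fail("expected NullPointerException");
        } catch (NullPointerException expected) {
            // String.concat rejects null, unlike the + operator
        }
    }
}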



If a JetBrainer is reading this thread,
I'd like to know - a wild guess is enough - what percentage of IDEA's code
base is not covered by automated tests.
(I don't count the EAP members as automated tests.)


Carlos Costa e Silva wrote:

> Alain Ravet wrote:
>>Why should it be robust, or scalable, if no test requires it?
..
> There are cases in which testing can't cover everything,

You don't need to test everything, you need to test enough.
As do pharmaceutical companies for new drugs: they test only on a
sample of the population.

If you don't test, how do you prove to your paying customer that you
actually delivered the goods, and didn't spend the money on booze instead?


Alain



"Alain Ravet" <alain.ravet.list@wanadoo.be> wrote in message
news:bl1hee$5g3$1@is.intellij.net...

Vincent O'Sullivan wrote:
> This prerequisite has always has always worried me. I know, from
> experience, that the simplest thing to pass the initial
> tests is frequently something that is not robust or scaleable.

>

Why should it be robust, or scalable, if no test requires it?



Carlos Costa e Silva wrote:

>

> Alain Ravet wrote:
>>Why should it be robust, or scalable, if no test requires it?
..
> There are cases in which testing can't cover everything,

>

You don't need to test everything, you need to test enough.


My point is that there's a contradiction here.

The typing monkeys can eventually produce code that passes all the tests. I
prefer someone thinking about producing robust code instead of someone that
just writes code that passes tests ;)

Note that I'm not advocating YAGNI programming. I do too much of that
already.

If you don't test, how do you prove to your paying customer that you
actually delivered the goods,


The customer doesn't even know that unit tests exist; he cares about code
that does what he needs doing. Customer acceptance trials are what count
here.

and didn't spend the money on booze instead?


And lately, the customer must be thinking I'm spending his (yet unpaid)
money on booze, as code has been somewhat slow coming out due, among other
things, to too much IntelliJ watching :)

Carlos




> "Alain Ravet" wrote :
>>You don't need to test everything, you need to test enough.

Carlos Costa e Silva wrote:
> ..I prefer someone thinking about producing robust code instead
> of someone that just writes code that passes tests ;)

I'd rather have someone prove to me, with tests, beyond a reasonable doubt,
that the code is robust and scalable, than someone telling me he is
confident that the code is robust and scalable because he read it
carefully and applied all the recipes he found in a book.


>>If you don't test, how do you prove your paying customer that you
>>actually delivered the goods,
>
> The customer doesn't even know that unit tests exists, he cares
about code
> that does what he needs doing . Customer acceptance trials are what
counts
> here.

I see no problem for a programmer to use acceptance tests to help him
develop robust and scalable code. There is no rule that prevents sharing
a test between the programmer's and the customer's test suites.


My point is: if you can't write an automated test for feature X, stop,
and think further. Rinse and repeat, till you find a way to test at
least a bit. Think further, and try to test more. If you still can't, I
feel your pain. Some things are (very) difficult to test, but I'm
inclined to think they are rare.


My point is: you don't need to test everything, you need to test
enough. People build parsers Test-First.
I understand the necessity of formal proofs for the space shuttle
software, but for the rest of the world, people can't afford the
enormous cost per line of this process.

Rinse, and repeat.

Alain



"Alain Ravet" <alain.ravet.list@wanadoo.be> wrote in message
news:bl3l7q$f35$2@is.intellij.net...
>

Carlos Costa e Silva wrote:
> ..I prefer someone thinking about producing robust code instead
> of someone that just writes code that passes tests ;)

>

I'd rather have someone prove to me, with tests, beyond a reasonable doubt,
that the code is robust and scalable, than someone telling me he is
confident that the code is robust and scalable because he read it
carefully and applied all the recipes he found in a book.


Alain, no one here questioned test first (or after) development.

What I question is this post:

"Alain Ravet" <alain.ravet.list@wanadoo.be> wrote in message
news:bl1hee$5g3$1@is.intellij.net...

Vincent O'Sullivan wrote:
> This prerequisite has always has always worried me. I know, from
> experience, that the simplest thing to pass the initial
> tests is frequently something that is not robust or scaleable.

>

Why should it be robust, or scalable, if no test requires it?


I will write only this:

Where is it guaranteed that the tests are complete and without faults? Where
are the tests that test the tests?



Carlos Costa e Silva wrote:
> "Alain Ravet" wrote
>> Vincent O'Sullivan wrote:
>> > This prerequisite has always has always worried me. ..
>>Why should it be robust, or scalable, if no test requires it?
>
> I will write only this:
> Where is it guranteed that the tests are complete and
> without faults? Where are the tests that test the tests?


Carlos,

It all started when Vincent O wrote that the absence of up-front
design would frequently lead to a weak, un-robust and unscalable
solution. I violently disagree, and don't understand how such a remark
can be posted in a forum that's dedicated to a refactoring tool.

My question
>>Why should it be robust, or scalable, if no test requires it?

was a short way of asking:

Q/ Why do you want a system to be robust and scalable?

A/ Because the clients required it.

Q/ How do you know you built a robust and scalable system?

A/ When the tests say so.


Hopefully, 100% of the tests are automated.
Less is painful.
Even less is risky.

> I will write only this:
> Where is it guranteed that the tests are complete and
> without faults? Where are the tests that test the tests?

I can only answer:

Q/ How do you know you have finished?

A/ When all the tests pass, whether they are automated or not.


Alain

