Tests “scoped” to implementation


Jakub Milkiewicz
 

I had a live coding interview yesterday and faced an interesting (at least for me) issue.
A small background: I was asked to implement a function that figures out whether the sum of any 2 elements in an input array equals a given number. Examples:

Input            Output
[1,2] and 3      true
[1,2,3] and 5    true
[1,2,3] and 4    true
[1,2,3] and 6    false


As I was constrained by time (~25 mins), I started with TDD but decided to skip most of the micro steps. In the end I implemented something pretty naive (O(n^2) complexity: comparing the sums of all possible pairs), but it wasn't well received by the interviewer.
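A minimal sketch of the naive pairwise approach described here (the function name is illustrative, not code from the interview):

```python
from itertools import combinations

def has_pair_sum(values, target):
    """Return True if any two distinct elements of `values` sum to `target`."""
    # Naive O(n^2): check every unordered pair.
    return any(a + b == target for a, b in combinations(values, 2))

# The examples from the post:
# has_pair_sum([1, 2], 3)     -> True
# has_pair_sum([1, 2, 3], 5)  -> True
# has_pair_sum([1, 2, 3], 4)  -> True
# has_pair_sum([1, 2, 3], 6)  -> False
```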
Moreover, my interviewer wanted me to add extra test cases (besides the ones which brought me to my solution, shown above) just in case "in the future you want to refactor the existing solution to something more sophisticated". I strongly refused, as those tests would not make any sense from a TDD point of view: they would all immediately pass.

Do you believe that adding extra test cases "for future refactoring" makes sense?
I can imagine that for a particular solution of this task (the algorithm being: sort the input list, then use 2 pointers), if I go strictly by TDD (a new test case must first fail), a new solution would (but doesn't need to) require different test cases...
What do you think?
Is it possible that TDD is not a good fit for strongly "algorithmic" tasks?

Br JM 
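For reference, the sort-plus-two-pointers alternative mentioned above could be sketched like this (hypothetical name; O(n log n) because of the sort):

```python
def has_pair_sum(values, target):
    """Sort, then walk two pointers in from both ends."""
    ordered = sorted(values)
    lo, hi = 0, len(ordered) - 1
    while lo < hi:
        pair_sum = ordered[lo] + ordered[hi]
        if pair_sum == target:
            return True
        if pair_sum < target:
            lo += 1   # sum too small: advance the low pointer
        else:
            hi -= 1   # sum too large: retreat the high pointer
    return False
```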
--


Ron Jeffries
 

I don't see anything wrong with adding tests to confirm things. A common case is when someone says "I think that breaks when ...". The ideal response, in my view, is just to add the test and run them. Let the computer decide.

As for adding tests in case of refactoring, that seems speculative to me and therefore probably wasteful. In an interview situation, of course, it's a question of how badly one wants the job offer. 

But there's nothing wrong with writing an additional test in response to a concern or question or doubt.

R



Ron Jeffries
It was important to me that we accumulate the learnings about the application over time by modifying the program to look as if we had known what we were doing all along.
-- Ward Cunningham


Jakub Milkiewicz
 

Thx Ron. Actually, I added more tests as requested by the interviewer, but the moment I saw them all passing I deleted them, as I considered them noise and redundant code (given I stick to my solution).

P.S. I didn't get the job, as my refusal to add more (wasteful) tests made me "hard to manage" in his eyes.



Rene Wiersma
 

Imagine the following inputs, and expected outputs:

(true, true) should return 0

(true, false) should return 1

(false, false) should return 2

(false, true) should return 2


Purely from a TDD point of view it is not necessary to write a test for the last case, as we would not have to add or change any existing code to make it pass. However, in this case I would add the test for completeness sake, and future reference.

If the number of permutations of input parameters is so large that testing them all becomes impractical, then it might make sense to use property-based testing, or some other suitable technique, for it. This does not mean TDD doesn't apply here. You can still start with straight-forward TDD, then refactor to property-based tests when it becomes apparent that it is better suited for the situation.
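A hand-rolled version of that property-based idea, checking a candidate implementation against a brute-force oracle on random inputs (in practice a library such as Hypothesis would generate and shrink the cases for you; all names here are illustrative):

```python
import random
from itertools import combinations

def has_pair_sum_naive(values, target):
    # Brute-force oracle: O(n^2), obviously correct.
    return any(a + b == target for a, b in combinations(values, 2))

def has_pair_sum_fast(values, target):
    # Candidate under test (here: sort + two pointers).
    ordered = sorted(values)
    lo, hi = 0, len(ordered) - 1
    while lo < hi:
        pair_sum = ordered[lo] + ordered[hi]
        if pair_sum == target:
            return True
        if pair_sum < target:
            lo += 1
        else:
            hi -= 1
    return False

def check_property(trials=500, seed=42):
    # Property: on any input, the candidate agrees with the oracle.
    rng = random.Random(seed)
    for _ in range(trials):
        values = [rng.randint(-10, 10) for _ in range(rng.randint(0, 8))]
        target = rng.randint(-20, 20)
        assert has_pair_sum_fast(values, target) == has_pair_sum_naive(values, target), (values, target)

check_property()
```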


- Rene Wiersma




Ron Jeffries
 

I'd add the test because it would require reasoning about the specific code to decide whether it would (probably) run.

Suppose we implemented it with a hash table, for example :)

R
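The hash-table implementation Ron alludes to might look like this (a sketch; the function name is an assumption, not code from the thread). Notably, the original four example tests still pass against it unchanged:

```python
def has_pair_sum(values, target):
    """Single pass with a hash set: O(n) time, O(n) space."""
    seen = set()
    for v in values:
        if target - v in seen:  # have we seen v's complement already?
            return True
        seen.add(v)
    return False
```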


Ron Jeffries
The seemingly easy way of learning — by asking — is not necessarily the best.
When you eventually understand, you will understand fully.
— Dragon
   The Line War
   (Neal Asher)


Russell Gold
 

I don’t understand the idea that they are noise. If they confirm desired behavior, having them seems a plus. So why would you feel a need to remove them?



Jakub Milkiewicz
 

If you already have tests:

[1,2] and 3      true
[1,2,3] and 5    true
[1,2,3] and 4    true
[1,2,3] and 6    false

and a working implementation (based on 2 loops comparing each possible pair in the input array), adding more tests like:

[-1,2,3] and 1   true
[3,2,-1] and 1   true
[3,2,-1] and 2   true

seems like noise, IMHO. None of these extra tests forces you to change your implementation (they immediately pass). Of course you can write them and "let the computer decide", but as soon as you find them passing from the beginning they are useless, as they simply cover cases you already have:
- summing elements #1 and #2
- summing elements #2 and #3
- summing elements #1 and #3

I can imagine that if the algorithm implemented in your SUT were different, you could probably end up with different test cases...

br JM




Jakub Milkiewicz
 



On Thu, 16 Jan 2020 at 13:42, Ron Jeffries <ronjeffriesacm@...> wrote:
I'd add the test because it would require reasoning about the specific code to decide whether it would (probably) run.

Suppose we implemented it with a hash table, for example :)

And that's my point. The order of, or the particular, test cases you end up with depends (or can depend) on your implementation. If you go for a particular solution, like a hash table or something else, it can drive you to write a new test case that fails first. I thought the idea of TDD (at least as defined by Uncle Bob) is to add a new failing test that forces you to change the implementation of your SUT, not simply to repeat what is already done...

br JM
 



Russell Gold
 

On Jan 16, 2020, at 10:43 AM, Jakub Milkiewicz <jmilkiewicz@...> wrote:




And that's my point. The order of, or the particular, test cases you end up with depends (or can depend) on your implementation. If you go for a particular solution, like a hash table or something else, it can drive you to write a new test case that fails first. I thought the idea of TDD (at least as defined by Uncle Bob) is to add a new failing test that forces you to change the implementation of your SUT, not simply to repeat what is already done...

That depends; if you have known requirements, it makes sense to add unit tests confirming the corresponding behavior, even if they pass when you write them: you might change your implementation later, and any requirement left untested now could easily be broken then. The fact that your current implementation happens to pass those tests doesn't make them superfluous.







Jakub Milkiewicz
 



On Thu, 16 Jan 2020 at 16:57, Russell Gold <russ@...> wrote:
That depends; if you have known requirements, it makes sense to add unit tests confirming the corresponding behavior, even if they pass when you write them: you might change your implementation later, and any requirement left untested now could easily be broken then. The fact that your current implementation happens to pass those tests doesn't make them superfluous.

Of course you can, as nothing prevents you from doing it. But in the simplified case I already provided above, what other cases can you add to express the requirement?

I believe that working in the micro steps of TDD allows you (in the majority of cases) to express all requirements as test cases... and other tests would simply be redundant, duplicating existing ones.
In my case, by adding all possible tests:
- for an empty array
- for an array with 1 element
- for an array with 2 elements
- summing #1 with #2, #1 with #3, and #2 with #3 for an array with 3 elements
and getting to the solution with a double/nested loop, I cannot see any point in adding more tests for my existing implementation. If my implementation were more sophisticated, then I could probably have more/different test cases, but for the existing solution I believe this covers all possible options (I am not talking about property-based testing here).

br JM

 



Steve Gordon
 

TDD is a great technique, but does not constitute all of software development.  Just because writing a test that succeeds without writing any additional code may not be a step in TDD does not mean that such tests cannot serve useful software development purposes.

Where I tend to quarrel with interviewers is that I firmly believe in not imposing design patterns until they emerge in the code, which does not tend to happen until there are additional requirements that motivate reuse.  Many interviewers request speculative design patterns that I resist on principle.  It has cost me more than a few jobs.


Ron Jeffries
 

Yes. One needs to be careful not to let the tests be implementation-dependent.
That suggests to me that the list of 4 might be too few.

Suppose, for example, that we think (rightly or wrongly) that there can only be positive numbers in the input. Then we might cleverly return false if the first element under comparison is > the match number. 

One wants to implement a small number of tests, but one also needs to be aware of how things fail.
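The kind of implementation-dependent shortcut Ron describes, sketched hypothetically: it passes all four of the original example tests, yet breaks as soon as a negative number appears, which is exactly why a test like `[5, -2] and 3` earns its keep.

```python
def has_pair_sum_shortcut(values, target):
    # Hypothetical "clever" variant that assumes all inputs are positive:
    # if one element already exceeds the target, skip it on the theory that
    # no pair containing it can reach the target.
    for i, a in enumerate(values):
        if a > target:
            continue  # unsound once negative numbers are allowed
        for b in values[i + 1:]:
            if a + b == target:
                return True
    return False

# Fine on the all-positive examples:
# has_pair_sum_shortcut([1, 2, 3], 4)  -> True
# ...but wrong once negatives appear:
# has_pair_sum_shortcut([5, -2], 3)    -> False, although 5 + (-2) == 3
```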

R



Ron Jeffries
ronjeffries.com
Leave me alone, I know what I'm doing. -- Kimi Raikkonen


Nat Pryce
 

This is a situation where I’d consider refactoring the example based tests to property-based tests. 

This would make the intent of the tests more explicit, because the reader does not need to infer the properties from triangulating example-based tests. 

It would provide more extensive coverage, thereby supporting a future refactoring that might make the coverage of the original examples insufficient. 

And it would uncover any “unknown unknowns” that didn’t occur to the developer(s) while they were coming up with examples.

—Nat



Tim Ottinger
 

I don’t know how many tests you really need for a Python one-liner like that.

`return any(sum(pair) == target for pair in itertools.permutations(values, 2))`

Check empty list returns false 
Check for no matches
Check for one match
Check for invalid input handling

There really aren’t many invariants to lock down for refactoring purposes.
I guess you could do more than that, but at some point you’re testing the compiler and the library (someone else’s code) and not your algorithm.
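Tim's four checks could be spelled out like this (assuming the one-liner is wrapped in a hypothetical `has_pair_sum(values, target)`; `combinations` suffices where `permutations` over-counts):

```python
from itertools import combinations

def has_pair_sum(values, target):
    # The one-liner under test.
    return any(sum(pair) == target for pair in combinations(values, 2))

assert has_pair_sum([], 6) is False           # empty list returns false
assert has_pair_sum([1, 2, 3], 100) is False  # no matches
assert has_pair_sum([1, 2, 3], 5) is True     # one match

# Invalid input: a non-iterable argument raises TypeError.
try:
    has_pair_sum(None, 6)
except TypeError:
    pass
else:
    raise AssertionError("expected TypeError for non-iterable input")
```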


--
Peace,


Jakub Milkiewicz
 



Ron Jeffries
 

Yes ... this is a case where removing duplication in tests would appeal to me more than it usually does.

R

On Jan 16, 2020, at 12:56 PM, Nat Pryce <nat.pryce@...> wrote:

This is a situation where I’d consider refactoring the example based tests to property-based tests. 

This would make the intent of the tests more explicit, because the reader does not need to infer the properties from triangulating example-based tests. 

It would provide more extensive coverage, thereby supporting a future refactoring that might make the coverage of the original examples insufficient. 

And it would uncover any “unknown unknowns” that didn’t occur to the developer(s) while they were coming up with examples.
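As a sketch of what Nat describes (hand-rolled with the standard library rather than a property-based framework such as Hypothesis; all names here are mine):

```python
import random
from itertools import combinations

def has_pair_sum(numbers, target):
    # Naive reference implementation under test.
    return any(a + b == target for a, b in combinations(numbers, 2))

def check_pair_sum_properties(trials=200, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        numbers = [rng.randint(-50, 50) for _ in range(rng.randint(2, 10))]
        # Property 1: the sum of any actual pair must be reported as present.
        i, j = rng.sample(range(len(numbers)), 2)
        assert has_pair_sum(numbers, numbers[i] + numbers[j])
        # Property 2: a target above the largest possible pair sum is never found.
        assert not has_pair_sum(numbers, 2 * max(numbers) + 1)
    return True
```

Stated as properties rather than examples, these hold for any correct implementation, so they survive a refactoring from the naive version to something more sophisticated.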


Ron Jeffries
I don't necessarily agree with everything I say. -- Marshall McLuhan


Ron Jeffries
 

I suppose that's what one is supposed to do with that example. :)

R

On Jan 16, 2020, at 1:06 PM, Tim Ottinger <tottinge@...> wrote:

I don’t know how many tests you really need for a python one-liner like that.

`return any(sum(pair) == target for pair in combinations(numbers, 2))` (with `from itertools import combinations`; `combinations` checks each pair once, where `permutations` would check each twice)

Check empty list returns false 
Check for no matches
Check for one match
Check for invalid input handling

There really aren’t many invariants to lock down for refactoring purposes.
I guess you could do more than that, but at some point you’re testing the compiler and the library (someone else’s code) and not your algorithm.


Ron Jeffries
If it is more than you need, it is waste. -- Andy Seidl


John Donaldson
 

Why not let the interviewer be the “additional requirements that motivate reuse”?

You can explain your reservations but then go ahead with the new requirements.

JD

 

From: testdrivendevelopment@groups.io <testdrivendevelopment@groups.io> On Behalf Of Steve Gordon
Sent: 16 January 2020 17:31
To: testdrivendevelopment@groups.io
Subject: Re: [testdrivendevelopment] Tests “scoped” to implementation

 

TDD is a great technique, but does not constitute all of software development.  Just because writing a test that succeeds without writing any additional code may not be a step in TDD does not mean that such tests cannot serve useful software development purposes.

 

Where I tend to quarrel with interviewers is that I firmly believe in not imposing design patterns until they emerge in the code, which does not tend to happen until there are additional requirements that motivate reuse.  Many interviewers request speculative design patterns that I resist on principle.  It has cost me more than a few jobs.

 

On Thu, Jan 16, 2020 at 9:12 AM Jakub Milkiewicz <jmilkiewicz@...> wrote:

 

 

On Thu, 16 Jan 2020 at 16:57, Russell Gold <russ@...> wrote:


On Jan 16, 2020, at 10:43 AM, Jakub Milkiewicz <jmilkiewicz@...> wrote:

 

 

On Thu, 16 Jan 2020 at 13:42, Ron Jeffries <ronjeffriesacm@...> wrote:

I'd add the test because it would require reasoning about the specific code to decide whether it would (probably) run.

 

Suppose we implemented it with a hash table, for example :)

 

And that's my point. The order of particular test cases you want depends (or can depend) on your implementation. If you go for a particular solution, like a hash table, or do something else, it can simply drive you to write a new test case, which shall fail first. I thought the idea of TDD (at least as defined by Uncle Bob) is to add a new failing test that forces you to change the implementation of your SUT, not simply repeat what is already done… 
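For instance, a hash-based implementation (one possible shape of what Russell alludes to; naming is mine) looks quite different from the nested loop:

```python
def has_pair_sum(numbers, target):
    # Single pass with a set of values seen so far: O(n) time, O(n) space.
    seen = set()
    for n in numbers:
        if target - n in seen:
            return True
        seen.add(n)
    return False
```

Notably, the original example cases cannot distinguish this from the O(n^2) version: no new test would fail first.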

 

That depends; if you have known requirements, it makes sense to add unit tests to confirm the corresponding behavior, even if they pass when you write them: you might change your implementation later, and any requirement you don’t test for now could easily be broken then. The fact that your current implementation happens to pass those tests doesn’t make them superfluous.

 

Of course you can, as nothing prevents you from doing it. In the simplified case I already provided above, what other cases can you add to express the requirement? 

 

I believe that working in the micro steps of TDD would (in the majority of cases) allow expressing all requirements as test cases... and other tests would simply be redundant, as they would duplicate existing tests. 

In my case, by adding all possible tests:

- for empty array

- array with 1 element

- array with 2 elements

- summing #1 with #2, #1 with #3, and #2 with #3 for an array with 3 elements 

and getting to the solution with a double/nested loop, I cannot see any point in adding more tests for my existing implementation. If my implementation were more sophisticated, then I could probably have more/different test cases, but for the existing solution I believe this covers all possible options (I am not talking about property-based testing).
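Spelled out as example-based tests, that micro-step sequence might read as follows (a sketch; test and function names are mine):

```python
import unittest
from itertools import combinations

def has_pair_sum(numbers, target):
    # One-liner implementation, in the style suggested earlier in the thread.
    return any(a + b == target for a, b in combinations(numbers, 2))

class HasPairSumTest(unittest.TestCase):
    def test_empty_array(self):
        self.assertFalse(has_pair_sum([], 3))

    def test_one_element(self):
        self.assertFalse(has_pair_sum([1], 1))

    def test_two_elements(self):
        self.assertTrue(has_pair_sum([1, 2], 3))
        self.assertFalse(has_pair_sum([1, 2], 4))

    def test_three_elements_covers_every_pair(self):
        self.assertTrue(has_pair_sum([1, 2, 3], 3))   # 1 + 2
        self.assertTrue(has_pair_sum([1, 2, 3], 4))   # 1 + 3
        self.assertTrue(has_pair_sum([1, 2, 3], 5))   # 2 + 3
        self.assertFalse(has_pair_sum([1, 2, 3], 6))
```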

 

br JM

 

 



 


R



On Jan 16, 2020, at 7:25 AM, Rene Wiersma via Groups.Io <R.wiersma@...> wrote:

 

Purely from a TDD point of view it is not necessary to write a test for the last case, as we would not have to add or change any existing code to make it pass. However, in this case I would add the test for completeness’ sake, and for future reference.

 


Ron Jeffries

The seemingly easy way of learning — by asking — is not necessarily the best.

When you eventually understand, you will understand fully.

— Dragon

   The Line War

   (Neal Asher)

 

 

 

 


Nat Pryce
 

On Thu, 16 Jan 2020 at 18:24, Jakub Milkiewicz <jmilkiewicz@...> wrote:

Surprisingly, no I can’t! I googled but found nothing.

It’d be a good topic for an article, if anyone is thinking of something to write about.




Mauricio Aniche
 

Hi Jakub,

My two cents: to me, there's a subtle difference between writing tests
(to guide the development) and testing (to make sure there are no bugs)
[1].

I think it's great that TDD guides you/us through the implementation
of the problem. But, in my experience, developers rarely do proper
systematic testing when doing TDD (or when at the initial stages of
the implementation). There's just too much going on in terms of "what
the requirements are", "how to better design the classes", "how to
integrate it with the rest of my system", "let me see what the overall
feature looks like when I start the app", etc.

As soon as the "big chunk of implementation" is done, I tend to then
dive into more systematic testing practices. For example, make sure
I'm testing all the boundaries [2] or even revisiting the
specification and making sure I explored the entire domain [3].

To sum up, my common flow is: I do TDD at the beginning, as I really
think it helps me to think better. When I'm kinda having that feeling
that "I'm done", I then dive into testing.

Cheers,

[1] https://www.mauricioaniche.com/blog/testing-vs-writing-tests/
[2] https://sttp.site/boundary-testing/
[3] https://sttp.site/specification-based-testing/

--
Maurício Aniche
Delft University of Technology
http://www.mauricioaniche.com
