[Dev] 27000 errors in the Tizen operating system

Andrey Karpov karpov at viva64.com
Fri Jul 14 14:19:36 UTC 2017


Hello Carsten,

After my article was published, several unfounded news stories appeared on the 
Internet. That is why I would like to note that in my article I did not write 
anything about the good or bad quality of the Tizen code, nor claim that the 
PVS-Studio analyzer is magic and the best of the best. I only gave the numbers 
I obtained. I cannot judge the quality of the Tizen code, since I have 
insufficient data for that. I understand clearly what you are talking about 
and agree with you.

My objective was to show that, despite the techniques already in use, the 
PVS-Studio analyzer can help make the Tizen code better and more reliable. 
I think I managed to demonstrate this by pointing to 900 code fragments 
which, in my opinion, deserve attention, fixing, and refactoring.

Unfortunately, the question "Include this as bugs per 1k lines of code or 
a similar metric?" is not entirely clear to me.

In my opinion, the article presents all the necessary data. Here is what I got:

  * The density of detected errors in code written by Samsung itself
    (marked "(c) 2015 Samsung Electronics"): 0.41 errors per 1000 lines
    of code.
  * The density of detected errors in the third-party libraries: 0.36
    errors per 1000 lines of code.

(I did not count comments as lines of code.)
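For reference, the metric itself is trivial to compute. Below is a
minimal C++ sketch; the 900 issues and the ~2.4 million analyzed lines
are taken from your estimate quoted below, so treat them as rough
approximations rather than exact counts:

    // Error density as used in this thread:
    // errors per 1000 lines of analyzed code (comments excluded).
    #include <cstdio>

    int main() {
        const double errors = 900.0;      // warnings judged to be real errors
        const double lines  = 2400000.0;  // analyzed lines of code
        std::printf("density = %.2f errors per 1000 lines\n",
                    errors / (lines / 1000.0));  // prints 0.38
        return 0;
    }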

Could these figures be inaccurate? Yes, they could. This is not scientific 
research; it is a practical demonstration that the tool can be useful.

Moreover, the Tizen developers may consider some of these errors not to be 
real errors, or at least not worth fixing. In that case the density of 
detected errors will decrease.

On the other hand, I may not have highlighted all the errors. I studied the 
report carefully, but without fanaticism. For example, I did not examine the 
V730 warnings (https://www.viva64.com/en/w/V730/) very thoroughly. Reviewing 
them is time-consuming and thankless work when you are dealing with someone 
else's code: it is never obvious whether it is dangerous or not that some 
class member has been left uninitialized. It is long, tedious labor that has 
to be done carefully. So, with a more careful review of the log, additional 
errors could probably be found.
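To illustrate what makes these warnings so tedious, here is a minimal,
invented C++ sketch of the pattern V730 points at (the Connection class
and its members are hypothetical, not taken from Tizen):

    #include <string>

    // V730: not all members of a class are initialized inside the constructor.
    struct Connection {
        int socket_fd;      // left uninitialized by the constructor below
        bool connected;     // also left uninitialized
        std::string host;   // fine: std::string default-constructs itself

        explicit Connection(const std::string &h) : host(h) {}
    };

Whether the uninitialized socket_fd and connected are actually dangerous
depends on how each object is used afterwards, and in unfamiliar code
answering that question for every single warning takes a long time.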

As for comparing Tizen's quality with that of other projects... That is a 
difficult question. Please understand that when writing our articles we do 
not aim to compare whose code is better or worse. We usually stop once we 
have found enough interesting bugs to write an article, because carefully 
analyzing all the warnings for a large project takes a great deal of time. 
Please also keep in mind that working with unfamiliar code is slow and 
difficult. That is why we sometimes quote the density of errors, but only 
for small projects, where it is not too hard to review the whole report. 
Examples:

  * Notepad++: we detected about 2 errors per 1000 lines of code.
    https://www.viva64.com/en/b/0511/
  * Far Manager for Linux: we detected about 0.464 errors per 1000
    lines of code. https://www.viva64.com/en/b/0478/
  * Tor project: we did not find anything, so the density is 0.
    https://www.viva64.com/en/b/0507/

As you can see, the results differ. However, it seems to me that they are 
not worth dwelling on. A static analyzer is, first of all, a tool for finding 
bugs in fresh code. Yes, old mistakes are worth fixing too, but generally 
they are not as critical as new ones: if an error has been sitting in the 
code for several years, it means it rarely manifests itself or bothers no 
one. That is why it is more interesting to look to the future rather than 
the past, and there the PVS-Studio analyzer can certainly be a good 
assistant for a programmer.

As for the percentage of false positives: it makes no sense to talk about it 
without first configuring the analyzer. That configuration is a lot of work, 
which we are ready to take on if cooperation ever begins. Can we handle it? 
Yes, we can:

  * https://www.unrealengine.com/en-US/blog/how-pvs-studio-team-improved-unreal-engines-code
  * https://www.unrealengine.com/en-US/blog/static-analysis-as-part-of-the-process
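As a small illustration of what this configuration looks like in
practice (reusing the hypothetical Connection class from above; please
check the exact markup against the PVS-Studio documentation), individual
false positives can be suppressed with a special comment at the end of
the offending line:

    #include <string>

    struct Connection {
        int socket_fd;      // reviewed: deliberately set later, not a bug here
        bool connected;
        std::string host;

        // The trailing //-V730 comment tells the analyzer that a human has
        // reviewed this line, so the warning disappears from future reports.
        explicit Connection(const std::string &h) : host(h) {} //-V730
    };

Beyond per-line marks, whole directories and individual diagnostics can
be switched off, and that filtering is the bulk of the configuration
work mentioned above.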

I think that for a project as large as Tizen, it makes sense to talk not 
only about product licensing, but also about extensive support provided by 
our team.

P.S. On Monday I will demonstrate that the analyzer can be useful not 
only for finding bugs, but also for micro-optimizations. :)

----
Best regards,
Andrey Karpov, Microsoft MVP,
Ph.D. in Mathematics, CTO
"Program Verification Systems" Co Ltd.


On 14.07.2017 4:35, Carsten Haitzler wrote:
> On Thu, 13 Jul 2017 14:26:35 +0300
> Andrey Karpov <karpov at viva64.com> wrote:
>
> Could you:
>
> 1. Include this as bugs per 1k lines of code or a similar metric? Total
> bugs is not that useful without knowing the total size of code looked at.
> At least in the summary.
> 2. Include metrics calculated similarly for other major projects (Linux
> kernel, etc. etc.).
>
> Why? The below is like saying "you're doing 120km/h!!!!!!" ... but if
> it's on a freeway and the speed limit is 130km/h ... in context it's
> very different. This here lacks context.
>
> I haven't used PVS-Studio before (it's on my list of things to try
> out and see if it's good), but I do know Coverity's scan service very
> well, so I'll do some back-of-the-napkin numbers:
>
> 1. In my experience, about 10-15% of bugs are false positives etc. with
> Coverity.
> 2. Coverity says the Linux kernel gets 0.48 issues per 1k lines of code.
> Applying the above false-positive rate, let's call that 0.40. Qt gets
> 0.72, so let's call that 0.61 adjusting for false positives. Glib gets
> 0.45, so 0.38 accounting for false positives. So:
>
> With your numbers, Tizen sees 900 issues in 2.4 million lines of code.
> That comes out at 0.38.
>
>     Linux kernel = 0.40
>     Qt           = 0.61
>     Glib         = 0.38
>     Tizen        = 0.38
>
> Yes, PVS-Studio is a different tool from Coverity. I'm making an
> assumption (much as you do too, in many ways) that these two tools are
> in the same ballpark and will report similar issues and numbers, though
> the sets may be disjoint. I'm going with this assumption because you
> didn't provide other numbers to go by, and it'd be nice to.
>
> My conclusion is that Tizen code quality is pretty decent in the scheme
> of things. Its bug rate is pretty low-ish.
>
> Now, on the other side, it's always great to have tools point out
> possible errors. Another tool is another weapon in the war chest for
> improving code quality. That's a good thing. Bugs should be looked into
> and addressed accordingly, based on actual severity and context. Just
> blindly fixing issues will result in a misallocation of time and
> resources, because an issue may be in a debug tool that is rarely
> used, and only for gathering quick information by a developer when
> something goes wrong... or it may be a seriously exploitable bug in code
> that can always be triggered remotely. So context is important.
> Knowing issues are there, and what a tool thinks they are, is a great
> speedup versus full code review. PVS-Studio is indeed such a tool. There
> are others too. We have tools of our own that we're using more and more.
>
>
>
>> Hello All,
>>
>> This article demonstrates that in the development of large
>> projects static analysis is not merely useful, but a completely
>> necessary part of the development process. It is the first
>> article in a series of posts devoted to using the PVS-Studio
>> static analyzer to improve the quality and reliability of the Tizen
>> operating system. To start, I checked a small part of the code of
>> the operating system (3.3%) and noted down about 900 warnings
>> pointing to real errors. If we extrapolate these results, we can
>> see that our team would be able to detect and fix about 27,000
>> errors in Tizen. Using the results of this study, I prepared a
>> presentation for the Samsung representatives with proposals for
>> possible cooperation. The meeting was postponed, so I decided not
>> to waste time and turned the material of the presentation into an
>> article: https://www.viva64.com/en/b/0519/
>>
>> ----
>> Best regards,
>> Andrey Karpov, Microsoft MVP,
>> Ph.D. in Mathematics, CTO
>> "Program Verification Systems" Co Ltd.
>>
>> _______________________________________________
>> Dev mailing list
>> Dev at lists.tizen.org
>> https://lists.tizen.org/listinfo/dev
>>
>>
>


