For example, the same kind of argument would apply to a language or library aiming to provide support for Interval Analysis (IA). A fundamental computational prerequisite for proper IA support is directed rounding: just exposing directed rounding would be enough to allow efficient custom implementations of IA, and would also enable other numerical feats, as discussed here. Conversely, while it's possible to provide support for IA without exposing the underlying directed rounding features, doing so results in an inefficient, inflexible standard.

To clarify, I'm not actually against the presence of high-level reduction and scan functions in OpenCL. They are definitely a very practical and useful set of functions, with the potential for very efficient vendor implementations; in fact, more efficient than any programmer may achieve, not just because they can be tuned by the vendor for the specific hardware, but also because they can make use of hardware capabilities that are not exposed in the standard nor via extensions.

The problem is that the set of available functions is very limited (and must be so), and as soon as a developer needs a reduction or scan function that is even slightly different from the ones offered by the language, it suddenly becomes impossible for such a reduction or scan to be implemented as efficiently as the built-in ones, simply because the underlying hardware capabilities necessary for the optimal implementation are not available to the developer.

Interestingly enough, I've hit a similar issue while working on a different code base, which makes use of CUDA rather than OpenCL, and for which we rely on the thrust library for the most common reduction operations. Despite this flexibility, however, even the thrust library cannot move easily beyond stateless reduction operators, so that, for example, one cannot trivially implement a parallel reduction with Kahan summation using only the high-level features offered by thrust. Of course, this is not a problem per se, since ultimately thrust just compiles to plain CUDA code, and it is possible to write such code by hand, thus achieving a Kahan summation parallel reduction, as efficiently as the developer's prowess allows.
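To make the statefulness concrete, here is what the Kahan accumulator looks like in plain, sequential C; this is only a sketch of the compensation logic (the function name is mine), not the parallel reduction discussed above:

```c
#include <stddef.h>

/* Kahan (compensated) summation: the accumulator state is the pair
 * (sum, c), where c tracks the low-order bits lost by each addition.
 * It is this extra piece of state that a stateless binary reduction
 * operator cannot carry along. */
float kahan_sum(const float *data, size_t n)
{
    float sum = 0.0f;
    float c = 0.0f;              /* running compensation */
    for (size_t i = 0; i < n; ++i) {
        float y = data[i] - c;   /* corrected next term */
        float t = sum + y;       /* low-order bits of y may be lost here */
        c = (t - sum) - y;       /* algebraically zero: recovers the lost bits */
        sum = t;
    }
    return sum;
}
```

Summing 1.0f followed by a hundred copies of 1e-8f, a naive float loop returns exactly 1.0f (each addend falls below half an ulp of the running sum and is lost), while kahan_sum returns approximately 1.000001.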

And since CUDA exposes most if not all hardware intrinsics, such a hand-made implementation can in fact be as efficient as possible on any given CUDA-capable hardware. The situation in OpenCL is sadly much worse, and not so much due to the lack of a high-level library such as thrust (to which end one may consider the Bolt library instead), but because the language itself is missing the fundamental building blocks to produce the most efficient reductions: while it does offer built-ins for the most common operations, anything beyond that must be implemented by hand, and cannot be implemented as efficiently as the hardware allows.

There is also another point to consider, and it has to do with the sad state of the OpenCL ecosystem. Developers who want to use OpenCL for their software, be it in academia, gaming, medicine or any other industry, must face the reality of the quality of existing OpenCL implementations. And while for custom solutions one can focus on a specific vendor, and in fact choose the one with the best implementation, software vendors have to deal with the idiosyncrasies of all OpenCL implementations, and the best they can expect is for their customers to be up to date with the latest drivers.

What this implies in this context is that developers cannot, in fact, rely on high-level functions being implemented efficiently, nor can they sit idle waiting for the vendors to provide more efficient implementations: more often than not, developers will find themselves working around the limitations of this or that implementation, rewriting code that should be reducible to one-liners in order to provide custom, faster implementations.

Therefore, can we actually expect vendors to really implement the work-group reduction and scan operations as efficiently as their hardware allows? I doubt it. However, while for the memory copies an efficient workaround was offered by simple loads, no such workaround exists for the reductions introduced in OpenCL 2.0. Before version 2.0, local memory was the primary cooperation feature OpenCL offered to work-items in the same work-group. The feature reflected the capability of GPUs when the standard was first proposed, and could be trivially emulated on other hardware by making use of global memory (generally resulting in a performance hit).

With version 2.0, OpenCL gained built-in work-group reduction and scan functions. These functions can be implemented via local memory, but most modern hardware can implement them using lower-level intrinsics that do not depend on local memory at all, or depend on it in smaller amounts than a hand-coded implementation would need. On GPUs, work-groups are executed in what are called warps or wave-fronts, and most modern GPUs can in fact exchange data between work-items in the same warp using specific shuffle intrinsics (which have nothing to do with the OpenCL C shuffle function): these intrinsics allow work-items to access the private registers of other work-items in the same warp.

While warps in the same work-group still have to communicate using local memory, a simple reduction algorithm can thus be implemented using warp shuffle instructions, requiring only one word of local memory per warp rather than one per work-item, which can lead to better hardware utilization (e.g. by allowing more work-groups to be resident at the same time). Additionally, vectorizing CPU platforms such as Intel's can trivially implement shuffles in the form of vector component swizzling.

Finally, all other hardware can still emulate them via local memory (which in turn might be inefficiently emulated via global memory, but still): and as inefficient as such an emulation might be, it would scarcely be worse than hand-coded use of local memory (which would still remain a fall-back option available to developers). In practice, this means that all OpenCL hardware can implement work-group shuffle instructions (some more efficiently than others), and parallel reductions of any kind could be implemented through work-group shuffles, achieving much better performance than standard local-memory reductions on hardware supporting shuffles natively, while not being less efficient than local-memory reductions where shuffles would be emulated.
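The warp-level reduction pattern described above can be sketched in plain C by simulating a 32-lane warp with an array; the `shfl_down` helper below is a stand-in for the hardware shuffle intrinsic, which on real GPUs exchanges registers directly, with no memory traffic:

```c
#define WARP_SIZE 32

/* Simulated "shuffle down": lane i reads the value held by lane
 * i+offset (or 0 when that lane does not exist). On real hardware
 * this is a register exchange, with no local memory involved. */
static int shfl_down(const int *vals, int lane, int offset)
{
    return (lane + offset < WARP_SIZE) ? vals[lane + offset] : 0;
}

/* Tree reduction over one warp: after log2(32) = 5 halving steps,
 * lane 0 holds the sum of all 32 lanes. */
int warp_reduce_sum(int *vals)
{
    for (int offset = WARP_SIZE / 2; offset > 0; offset /= 2) {
        int incoming[WARP_SIZE];
        for (int lane = 0; lane < WARP_SIZE; ++lane)  /* all lanes read... */
            incoming[lane] = shfl_down(vals, lane, offset);
        for (int lane = 0; lane < WARP_SIZE; ++lane)  /* ...then all add */
            vals[lane] += incoming[lane];
    }
    return vals[0];
}
```

In the real thing each iteration is a single shuffle instruction per lane, executed in lockstep by the whole warp; only the cross-warp combination step needs (one word per warp of) local memory.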

Finally, it should be obvious by now that the choice of exposing work-group reduction and scan functions, but not work-group shuffle functions, in OpenCL 2.0 is a poor one. The obvious solution would be to provide work-group shuffle instructions at the language level. This could in fact be a core feature, since it can be supported on all hardware, just like local memory, and the device could be queried to determine whether the instructions are supported in hardware or emulated, pretty much like devices can be queried to determine if local memory is physical or emulated.

Optionally, it would be nice to have some introspection to allow the developer to programmatically find the warp size, i.e. the number of work-items that can exchange data directly through such shuffles.

The terminal, as powerful as it might be, has a not undeserved fame of being boring. Boring white (or some other fixed color) on boring black (or some other fixed color) for everything.

Yet displays nowadays are capable of showing millions of colors, and have been able to display at least a handful since the eighties. A lot of modern programs will even try to use colors in their output right from the start, making it easier to tell apart semantically different parts of it. One of the last strongholds of the boring white-on-black (or conversely) terminal displays is man, the manual page reader. Man pages constitute the backbone of technical documentation in Unix-like systems, and range from the description of the syntax and behaviour of command-line programs to the details of system calls and programming interfaces of libraries, passing through a description of the syntax of configuration files, and whatever else one might feel like documenting for ease of access.

The problem is, man pages are boring: they make essentially no use of colors. In fact, there's a pager, most, that colorizes its output by default. Of course, most is otherwise inferior in many ways to the more common less pager, so there are solutions to do the same trick (color replacement) with less. Both solutions, as well as a number of other tricks based on the same principle, are pretty well documented in a number of places, and can be found summarized on the Arch wiki page on man pages. I must say I'm not too big a fan of this approach: while it has the huge advantage of being very generic (in fact, maybe a little too generic), it has a hackish feeling, which had me look for a cleaner, lower-level approach: making man itself (or rather the groff typesetter it uses) colorize its output.

The approach I'm going to present will only work if man uses a recent enough version of groff that actually supports colors. Also, the approach is limited to specific markup. It can be extended, but doing so robustly is non-trivial. By default, groff will look for macro packages in a lot of places, among which the user's home directory. The groff macro package used to typeset man pages includes an arbitrary man.local file, if one is found, allowing local overrides; we will write our own man.local. Most of the man macros are presentational rather than semantic, but there are a few exceptions, most notably the .SH command to typeset section headers.

So in this example we will only override .SH to set section headers to green, leaving the rest of the man pages as-is. Instead of re-defining .SH from scratch, we will simply expand it, adding stuff around the original definition: the trick is to rename .SH to .SHorg, and then define a new .SH that sets the color, invokes the original .SHorg, and finally restores the default color. The exact same approach can be used to colorize the second-level section header macro, .SS: just repeat the same code with a general replacement of H with S, and tune the color to your liking.
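As a sketch of what such a man.local could contain (assuming a recent groff; I'm also assuming that a bare .gcolor request restores the previous color, and the choice of green is of course arbitrary):

```groff
.\" man.local: local overrides for the man macro package.
.\" Rename the original .SH, then redefine it to wrap the
.\" original definition in color-changing requests.
.rn SH SHorg
.de SH
.  gcolor green
.  SHorg \\$*
.  gcolor
..
```

The `\\$*` passes along whatever arguments the page gave to .SH, so headers specified inline or as arguments are both handled.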

Another piece of semantic markup that is rather easy to override, even though it's only rarely used in actual man pages (possibly because it's a GNU extension), is the .UR/.UE pair of commands to typeset URLs, and its counterpart, the .MT/.ME pair of commands to typeset email addresses. Keep in mind that I'm not a groff expert, so there might be better ways to achieve these overrides. More recent versions of grotty (the groff post-processor for terminal output) use ANSI SGR escape codes for formatting, supporting colors as well as emboldening, italicizing and underlining.

On some distributions (Debian, for example) this is disabled by default, and must be enabled with some not-well-documented method (typically an environment variable). Of course, you should also make sure that the pager used by man supports SGR escape sequences, for example by making your pager be less (which is likely to be the default already, if available) and telling it to interpret SGR sequences (e.g. via its -R option).

That's it. Now section headers, emails and URLs will come out typeset in color, provided the man pages are written using semantic markup. It is also possible to override the non-semantic markup that is used everywhere else, such as all the macros that combine or alternate B, I and R to mark options, parameters, arguments and types. This would definitely make pages more colorful, but whether or not they would actually come out decently remains to be seen.

A much harder thing to achieve is the override of commands that explicitly set the font (e.g. the .B and .I macros, or the corresponding font escapes). But at this point the question becomes: is it worth the effort?


Wouldn't it be better to start working on a cleanup and extension of the man macro package for groff, so that it includes (and uses) more semantic markup? If your terminal emulator truly supports italics (honestly, a lot of modern terminals do, except possibly the non-graphical consoles), you can configure grotty to output instructions for italics instead of the usual behavior of replacing italics with underline. This is achieved by passing the -i option to grotty. Since grotty is rarely (if ever) called directly, one would usually pass the -P-i option to groff.

Opera is dead. I decided to give it some time, see how things developed (my first article on the topic was from over two years ago, and the more recent one about the switch to Blink was from February last year), but it's quite obvious that the Opera browser some of us knew and loved is dead for good. For me, the finishing blow was a comment from an Opera employee, in response to my complaints about the regressions in standards support when Opera switched from Presto to Blink as its rendering engine:

You may have lost a handful of things, but on the other hand you have gained a lot of other things that are not in Presto. The things you have gained are likely more useful in more situations as well. Whether you want to use Opera or not is entirely up to you. I merely pointed out that for the lost standards support, you are gaining a lot of other things and those things are likely to be more useful in most cases.

But it gets even better. I'm obviously not the only one complaining about the direction the new Opera has taken. One Patata Johnson comments: There used to be a time when Opera fought for open standards and against Microsoft's monopoly with its IE. Am I the only one who is concerned about their new path? Opera is in a much better position to promote open standards with Blink than with Presto.

It's kind of hard to influence the world when the engine is basically being ignored. How does being another skin over Blink help promote open standards? It helps promote Blink experimental features, lack of standard compliance, and buggy implementation of the standards it does support. That does as much to promote open standards as the Trident skins did during the 90s browser wars.

As small as Opera's market share was before the switch, its rendering engine was independent and precisely because of that it could be used to push the others into actually fixing their bugs and supporting the standards. It might have been ignored by the run-of-the-mill web developers, but it was actually useful in promoting standard compliance by being a benchmark against which other rendering engines were compared.

The first thing that gets asked when someone reports a rendering issue is: how does it behave in the other rendering engines? If there are no other rendering engines, bugs in the dominant one become the de facto standard, against the open standard of the specification. With the switch to Blink, Opera has even lost that role. As minor a voice as it might have been, it has now gone completely silent. And let's be serious: the rendering engine it uses might not be ignored now (it's not their own, anyway), but I doubt that Opera has actually gained anything in terms of user base, and thus weight.

If anything, I'm seeing quite a few former supporters switching away. Honestly, I suspect Opera's survival is much more in danger now than it was before the switch. The truth is, the new Opera stands for nothing that the old Opera stood for: the old Opera stood for open standards, compliance, and a feature-rich, highly-customizable Internet suite. The new one is anything but that. At the very least, for people who miss the other qualities that made Opera worthwhile (among which the complete, highly customizable user interface, and the quite complete Internet suite capabilities, including mail, news, RSS, IRC and BitTorrent support) there's now the open-source Otter browser coming along.

It's still WebKit-based, so it won't really help break the development of a web monoculture, but it will at least offer a more reliable fallback to those loving the old Opera and looking for an alternative to switch to from the new one. For my part, I will keep using the latest available Presto version of Opera for as long as possible. In the meantime, Firefox has shown to have the most complete support for current open standards, so it's likely to become my next browser of choice.

I will miss Opera's UI, but maybe Otter will also support Gecko as a rendering engine, and I might be able to get the best of both worlds.

While it can be used as a library from other Ruby programs, its possibly most interesting use is as a command-line filter: it can read the data series to be analyzed from its standard input (one datum per line), and it produces the relevant statistics on its standard output. Typical usage would be something like piping in the number of lines of the source files for one of my ongoing creative works at the time of writing, and reading off the statistics.

Command-line options such as --[no-]histogram and --[no-]boxplot can be used to override the default choices on what to plot (if anything), and options such as --dumb can be used to let gnuplot output a textual approximation of the plot(s) on the terminal itself.

One would assume that doing integer math with computers would be easy: after all, an n-bit word can represent 2^n distinct values. So an 8-bit byte can represent 256 distinct values, a 16-bit word 65,536 distinct values, a 32-bit word 4,294,967,296, and a 64-bit word a whopping 18,446,744,073,709,551,616 (over 18 quintillion).

Of course the question now is: which ones? Let's consider a standard 8-bit byte. The most obvious and natural interpretation of a byte (i.e. of its sequence of bits) is as a non-negative integer written in base 2. So binary 00000000 would be decimal 0, binary 00000001 would be decimal 1, binary 00000010 would be decimal 2, binary 00000011 would be decimal 3, and so on, up to binary 11111111, which would be decimal 255. From 0 to 255 inclusive, that's exactly the 256 values that can be represented by a byte read as an unsigned integer.

Unsigned integers can be trivially promoted to wider words (e.g. from 8-bit bytes to 16-bit words) by simply setting all the additional, more significant bits to zero. This is so simple that it's practically boring. Why are we even going through this? Because things are not that simple once you move beyond unsigned integers. But before we do that, I would like to point out that things aren't that simple even if we're just sticking to non-negative integers. Let's stick to just addition and multiplication at first, which are the simplest and best defined operations on integers.

Of course, the trouble is that if you are adding or multiplying two numbers between 0 and 255, the result might be bigger than 255. In general, if you are operating on n-bit numbers, the result might not be representable in n bits. So what does the computer do when this kind of overflow happens? Most programmers will now chime in and say: well, duh, it wraps! However, this is not the only possibility.

For example, specialized DSP (Digital Signal Processing) hardware normally operates with saturation arithmetic: overflowing values are clamped to the maximum representable value. Saturation has different algebraic properties: in particular, in saturation arithmetic algebraic addition is not associative, and multiplication does not distribute over algebraic addition. By contrast, with modular arithmetic, both expressions in each case give the correct result.

So, when the final result is representable, modular arithmetic gives the correct result in the case of a static sequence of operations. However, when the final result is not representable, saturation arithmetic returns values that are closer to the correct one than modular arithmetic: the overflowing value is clamped to the maximum representable one, in contrast to the severely underestimated wrapped-around result. Being as close as possible to the correct result is an extremely important property not just for the final result, but also for intermediate results, particularly in the cases where the sequence of operations is not static, but depends on the magnitude of the values (for example, software implementations of low- or high-pass filters).
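These claims are easy to check with 8-bit examples in C, where unsigned arithmetic is modular by definition, and saturation is emulated with helpers of my own (sat_add, sat_sub):

```c
#include <stdint.h>

/* 8-bit saturating add/subtract, for comparison with the wrapping
 * behavior that casting to uint8_t gives us via modular reduction. */
static uint8_t sat_add(uint8_t a, uint8_t b)
{
    unsigned s = (unsigned)a + b;
    return s > 255 ? 255 : (uint8_t)s;  /* clamp on overflow */
}

static uint8_t sat_sub(uint8_t a, uint8_t b)
{
    return a > b ? (uint8_t)(a - b) : 0;  /* clamp on underflow */
}
```

With a = 200, b = 100, c = 50 the exact value of a + b - c is 250, which is representable: modular arithmetic recovers it exactly ((200 + 100) mod 256 = 44, and 44 - 50 wraps back around to 250), while saturation gets stuck at 255 - 50 = 205. With c = 10 the exact result 290 is not representable, and saturation's 245 is much closer to it than the modular 34.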

In these applications (of which DSP, be it audio, video or image processing, is probably the most important one) both modular and saturation arithmetic might give the wrong result, but the modular result will usually be significantly worse than that obtained by saturation. For example, modular arithmetic might wrap a high frequency around into a much lower one, and with a frequency threshold in between this would lead to attenuation of a signal that should have passed unchanged, or conversely. Amplifying an audio signal beyond the representable values could result in silence with modular arithmetic, but it will just produce the loudest possible sound with saturation.

We mentioned that promotion of unsigned values to wider data types is trivial. What about demotion? For example, knowing that the original values are stored as 8-bit bytes and that the final result has to be again stored as an 8-bit byte, a programmer might consider operating with 16-bit or wider words to try and prevent overflow during computations. However, when the final result has to be demoted again to an 8-bit byte, a choice has to be made, again: should we just discard the higher bits (which is what modular arithmetic does), or return the highest representable value when any higher bits are set (which is what saturation arithmetic does)?
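The two demotion policies can be sketched in C as follows (the helper names are mine):

```c
#include <stdint.h>

/* Demoting a 16-bit intermediate result to an 8-bit final value:
 * modular demotion discards the high bits, while saturating
 * demotion clamps to the maximum when any high bit is set. */
static uint8_t demote_mod(uint16_t x) { return (uint8_t)x; /* x mod 256 */ }
static uint8_t demote_sat(uint16_t x) { return x > 255 ? 255 : (uint8_t)x; }
```

For a value that fits (say 200) the two agree; for one that doesn't (say 300) modular demotion yields 44 while saturating demotion yields 255.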

Of course, the real problem in the examples presented in the previous section is that the data type used (e.g. an 8-bit byte) was too small for the values involved. One of the most important things programmers should consider (maybe the most important one) when discussing doing math on the computer is precisely choosing the correct data type. For integers, this means choosing a data type that can represent correctly not only the starting values and the final results, but also the intermediate values. If your data fits in 8 bits, then you want to use at least 16 bits.

If it fits in 16 bits (but not 8), then you want to use at least 32, and so on. Having a good understanding of the possible behaviors in case of overflow is extremely important to write robust code, but the main point is that you should not overflow. In case you are still of the opinion that integer math is easy, don't worry: we still haven't gotten to the best part, which is how to deal with relative numbers, or, as the layman would call them, signed integers.

However, let's be honest here: non-negative integers are pretty limiting. We would at least like to have the possibility to also specify negative numbers. And here the fun starts. Although there is no official universal standard for the representation of relative numbers (signed integers) on computers, there is undoubtedly a dominating convention, which is the one programmers are nowadays used to: two's complement.

However, this is just one of no less than four possible representations: sign bit and mantissa, ones' complement, two's complement, and offset binary (bias). One of the issues with the representation of signed integers in binary computers is that binary words can always represent an even number of values, but a symmetrical amount of positive and negative integers, plus the value 0, is odd. Hence, when choosing the representation, one has to choose between either a doubly-represented zero (with a symmetric range of non-zero values) or a unique zero (with an asymmetric range of positive and negative values). Of the four signed number representations enumerated above, the sign bit and ones' complement representations have a signed zero, but each non-zero number has a representable opposite, while two's complement and bias only have one value for zero, but have at least one non-zero number that has no representable opposite.

Offset binary is actually very generic and can have significant asymmetries in the ranges of representable numbers. The biggest issue with having a negative zero is that it violates a commonly held assumption, which is that there is a bijective correspondence between representable numerical values and their representation, since both positive and negative 0 have the same numerical value 0 but have distinct bit patterns.

Where this presents the biggest issue is in the comparison of two words. When comparing words for equality, we are now posed a conundrum: should they be compared by their value, or should they be compared by their representation? Would either choice satisfy everybody? Is it even worth being able to tell the two zeroes apart? And finally, is the symmetry worth the loss of a representable value? On the other hand, if we want to keep the bijectivity between value and representation, we will lose the symmetry of negation.

Consider for example the case of the standard two's complement representation with 8-bit bytes: the largest representable positive value is 127, while the largest in magnitude representable negative value is -128. When computing opposites, all values between -127 and 127 have their opposite (which is the one we would expect algebraically), but negating -128 gives -128 again (which, while algebraically wrong, is at least consistent with modular arithmetic, where adding -128 and -128 actually gives 0).

The conceptually simplest approach to represent signed integers, given a fixed number of digits, is to reserve one bit to indicate the sign, and leave the other n-1 bits to indicate the mantissa, i.e. the absolute value of the number.

By convention, the sign bit is usually taken to be the most significant bit, and again by convention it is taken to be 0 for a positive number and 1 for a negative number. With this representation, two opposite values have the same representation except for the most significant bit. So, for example, assuming our usual 8-bit byte, 1 would be represented as 00000001, while -1 would be represented as 10000001. With an 8-bit byte the largest positive integer is 127, i.e. 01111111, and its negative counterpart is -127, i.e. 11111111.

As mentioned, one of the downsides of this representation is that it has both positive and negative zero, respectively represented by the 00000000 and 10000000 bit patterns. While the sign bit and mantissa representation is conceptually obvious, its hardware implementation is more cumbersome than it might seem at first, since operations need to explicitly take the operands' signs into account. A more efficient approach is offered by ones' complement representation, where negation maps to the ones' complement, i.e. flipping all the bits of the word. For example, with 8-bit bytes, the value 1 is as usual represented as 00000001, while -1 is represented as 11111110. The range of representable numbers is the same as in the sign bit and mantissa representation, so that, for example, 8-bit bytes range from -127 to 127, and we have both positive zero (00000000) and negative zero (11111111). As in the sign-bit case, it is possible to tell if a number is positive or negative by looking at the most significant bit: 0 indicates a positive number, while 1 indicates a negative number, whose absolute value can then be obtained by flipping all the bits.
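A quick C sketch of ones' complement on 8-bit patterns (the helpers are mine; the uint8_t holds the raw bit pattern, not a C signed value):

```c
#include <stdint.h>

/* Ones' complement negation: flip every bit of the 8-bit pattern. */
static uint8_t oc_neg(uint8_t x) { return (uint8_t)~x; }

/* Numerical value encoded by a pattern: a set MSB means negative,
 * and the absolute value is obtained by flipping the bits back. */
static int oc_value(uint8_t x)
{
    return (x & 0x80) ? -(int)(uint8_t)~x : (int)x;
}
```

For instance, oc_neg turns 00000001 (1) into 11111110 (-1), and 00000000 (+0) into 11111111 (-0): both zero patterns decode to the value 0.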

Sign-extending a value can be done by simply propagating the sign bit of the smaller-size representation to all the additional bits in the larger-size representation. Still, ones' complement retains a negative zero, and its arithmetic needs special handling (such as the end-around carry for addition). Because of this, two's complement representation, which is simpler to implement and has no negative zero, has gained much wider adoption.

In two's complement, the opposite of a number is obtained by flipping all the bits and then adding 1 (discarding any carry out of the top bit). Using our usual 8-bit bytes as an example, 1 will as usual be 00000001, while -1 will be 11111111. With 8-bit bytes the largest positive number is 127, represented by 01111111, whose opposite, -127, is represented by 10000001, while the largest in magnitude negative number is -128, represented by 10000000. In two's complement representation there is no negative zero, and the only representation for 0 is given by all bits set to 0. However, as discussed earlier, this leads to a negative value whose opposite is the value itself, since the representation of the largest in magnitude negative representable number is invariant under two's complement.
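The same can be sketched in C on raw 8-bit patterns (the helper name is mine), which also exposes the -128 corner case:

```c
#include <stdint.h>

/* Two's complement negation: flip all the bits, then add one,
 * with the carry out of the top bit discarded (i.e. mod 256). */
static uint8_t tc_neg(uint8_t x) { return (uint8_t)(~x + 1u); }
```

Note how 10000000 (-128) is mapped to itself: its algebraic opposite, 128, is simply not representable in 8 bits, while 00000000 is the unique zero.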

As in the other two representations, the most significant bit can be checked to see if a number is positive or negative. As in the ones' complement case, sign extension is done trivially by propagating the sign bit of the smaller-size value to all additional bits of the larger-size value.

Offset binary (or biased) representation is quite different from the other representations, but it has some very useful properties that have led to its adoption in a number of schemes (most notably the IEEE 754 standard for floating-point representation, where it's used to encode the exponent, and some DSP systems).

Before getting into the technical details of offset binary, we look at a possible motivation for its inception. The attentive reader will have noticed that all the previously mentioned representations of signed integers have one interesting property in common: they violate the natural ordering of the representations. Since the most significant bit is taken as the sign bit, and negative numbers have a most significant bit set to one, natural ordering by bit patterns puts them after the positive numbers, whose most significant bit is set to 0.

Additionally, in the sign bit and mantissa representation, the ordering of negative numbers is reversed with respect to the natural ordering of their representations. This means that when comparing numbers it is important to know if they are signed or unsigned (and, if signed, with which representation) to get the ordering right. The biased representation is one way (and probably the most straightforward way) to circumvent this.

The bias is the value that is added to the represented value to obtain the representation, and subtracted from the representation to obtain the represented value. The minimum representable number is then the opposite of the bias. Of course, the range of representable numbers doesn't change: if your data type can only represent 256 values, you can only choose which 256 values, as long as they are consecutive integers. For example, with 8-bit bytes (256 values) the natural choice for the bias is 128, leading to a representable range of integers from -128 to 127, which looks distinctly similar to the one that can be expressed in two's complement representation.
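An excess-128 encoding is trivial to sketch in C (the helper names are mine); note how the unsigned ordering of the representations matches the numeric ordering of the values, which is the whole point of the scheme:

```c
#include <stdint.h>

#define BIAS 128  /* the natural bias for an 8-bit storage type */

/* Offset binary (excess-128): representation = value + bias,
 * value = representation - bias. */
static uint8_t bias_enc(int v)     { return (uint8_t)(v + BIAS); }
static int     bias_dec(uint8_t r) { return (int)r - BIAS; }
```

So -128 encodes as 0, 0 as 128, and 127 as 255, and comparing two encoded bytes as plain unsigned values orders them correctly by numerical value.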

Of course, such arbitrary biases are rarely supported in hardware, so operating on offset binary usually requires software implementations of even the most common operations, with a consequent performance hit. Still, assuming the hardware uses modular arithmetic, offset binary is at least trivial to implement for the basic operations. One situation in which offset binary doesn't play particularly well is that of sign extension, which was trivial in ones' and two's complement representations. The biggest issue in the case of offset binary is, obviously, that the offsets in the smaller and larger data types are likely going to be different, although usually not arbitrarily so (biases are often related to the size of the data type).

We're now nearing the end of our discussion of integer math on computers. To conclude our exposition of its joys, we now discuss the beauty of integer division and the related modulus operation. Let's start by assuming that the dividend e is non-negative and the divisor o is strictly positive; the natural choice in this case is for the division to round down. The upside of this choice is that it's trivial to implement other forms of division (rounding up, or to the nearest integer, for example) by simply adding appropriate correcting factors to the dividend. What happens when o is zero? Mathematically, division by zero is not defined (although in some contexts where infinity is considered a valid value it may give infinity as a result, as long as the dividend is non-zero).

In hardware, anything can happen. There's hardware that flags the error. There's hardware that produces bogus results without any chance of knowing that a division by zero happened. There's hardware that produces consistent results (always zero, or the maximum representable value), flagging or not flagging the situation. Of course, this means that to write robust code it's necessary to sprinkle the code with conditionals to check that divisions will successfully complete.

If the undefined division by zero may not be considered a big issue per se, the situation is much more interesting when either of the operands of the division is negative. First of all, one would be led to think that at least the sign of the result would be well defined: negative if the operands have opposite signs, positive otherwise. But this is not the case for the widespread two's complement representation with modular arithmetic, where the division of two negative numbers can give a negative number: of course, we're talking about the corner case of the largest in magnitude negative number, which when divided by -1 returns itself, since its opposite is not representable.

Does the same hold true when either e or o is negative? In the third case, the equation would only be satisfied if the division rounds down, but not if the division rounds towards zero. This could lead someone to think that the best choice would be a rounding-down division with an always non-negative modulo. Integer math on a computer is simple only as long as you never think about dealing with corner cases, which you should if you want to write robust, reliable code.
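For reference, C (since C99) mandates the truncating choice: division rounds towards zero and the remainder takes the sign of the dividend, so -7/2 is -3 with remainder -1. A division with an always non-negative modulo (often called Euclidean division; the helper names are mine) can be layered on top of it:

```c
#include <stdlib.h>  /* abs */

/* Euclidean division: the quotient is adjusted so that the
 * remainder always satisfies 0 <= r < |o|, and the identity
 * div_eucl(e, o) * o + mod_eucl(e, o) == e still holds. */
static int div_eucl(int e, int o)
{
    int q = e / o, r = e % o;         /* C's truncating division */
    if (r < 0) q += (o > 0) ? -1 : 1; /* fix up when r is negative */
    return q;
}

static int mod_eucl(int e, int o)
{
    int r = e % o;
    return r < 0 ? r + abs(o) : r;
}
```

With these, -7 divided by 2 gives quotient -4 and modulo 1, and the sign of the modulo no longer depends on the sign of either operand.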

With integer math, this is the minimum of what you should be aware of.

OpenCL does not expose directed rounding: the cl_khr_select_fprounding_mode extension, which allowed kernels to select the rounding mode, was removed from the standard with OpenCL 1.1. A consequence of this is that it is currently impossible to implement robust numerical code in OpenCL. In what follows I will explore some typical use cases where directed rounding is a powerful, sometimes essential tool for numerical analysis and scientific computing. This will be followed by a short survey of existing hardware and software support for directed rounding. The article ends with a discussion of what must, and what should, be included in OpenCL to ensure it can be used as a robust scientific programming language.

In his paper How Futile are Mindless Assessments of Roundoff in Floating-Point Computation, professor William Kahan (who helped design the IEEE 754 floating-point standard) explains that, given multiple formulas that would compute the same quantity, the fastest way to determine which formulas are numerically trustworthy is to:

Rerun each formula separately on its same input but with different directed roundings; the first one to exhibit hypersensitivity to roundoff is the first to suspect. The goal of error-analysis is not to find errors but to fix them. They have to be found first. Our failure to find errors long suspected or known to exist is too demoralizing.

We may just give up. Essential tools for the error-analysis of scientific computing code thus cannot be implemented in OpenCL as it currently stands. Directed rounding is also an important tool to ensure that arguments to functions with a limited domain are computed in such a way that the domain conditions are respected numerically whenever they are respected analytically. To clarify, in this section I'm talking about correctly rounding the argument of a function, not its result. When the argument to such a function is computed through an expression (particularly an ill-conditioned one) whose result is close to one of the limits of the domain, the lack of correct rounding can cause the argument to be evaluated just outside of the domain instead of just inside it (which would be the analytically correct answer).

This would cause the result of the function to be Not-a-Number instead of the correctly rounded answer. A discussion on the importance of correct rounding can again be found in Kahan's works, see e.g. Why we needed a floating-point standard. Robust coding of analytically correct formulas is thus impossible to achieve in OpenCL as it stands. A typical example of a numerical method which needs directed rounding (different rounding modes in different parts of the computation) is Interval Analysis (IA).

Similar arguments hold for other forms of self-verified computing as well. In OpenCL, none of this can currently be implemented efficiently, due to the lack of directed rounding. From the rationales presented so far, one could deduce that directed rounding is essentially associated with the stability and robustness of numerical code. There are however other cases where directed rounding can be used which are not explicitly associated with things such as roundoff errors and error bound estimation. One such case is given by meshless particle methods, where the motion of each particle is typically determined by the interaction between the particle and its neighbors within a given influence sphere.

Checking for proximity between two particles is done by computing the length of the relative distance vector (difference of positions), and the same distance is often used in the actual computation of the influence between particles. As usual, to avoid bias, both the relative distance vector and its length should be computed with the default round-to-nearest-even rounding mode for normal operations. Due to the mesh-less nature of the method, neighborhoods may change at every time-step, requiring a rebuild of the neighbors list. To improve performance, this can be avoided by rebuilding the neighbors list at a lower frequency (e.g. every few time-steps), compensating with an enlarged influence radius during the search.

When such a strategy is adopted, neighbors need to be re-checked for actual proximity during normal operations, so that, for maximum efficiency, a delicate balance must be found between the reduced rebuild frequency and the increased number of potential neighbors caused by the enlarged influence radius. One way to improve efficiency in this sense is to round the computation of the relative distance vector and its length towards zero during neighbors list construction: this maximizes the impact of the enlarged influence radius by including potential neighbors which are within one or two ULPs of it.

This allows the use of very tight bounds on how much to enlarge the influence radius, without loss of correctness in the simulations. All in all, directed rounding has been missing from OpenCL since version 1.1, when the cl_khr_select_fprounding_mode extension was removed. In its present state, OpenCL offers no way to select the rounding mode for arithmetic operations. This effectively prevents robust numerical code from being implemented and analyzed in OpenCL. While I can understand that core support for directed rounding in OpenCL is a bit of a stretch, considering the wide range of hardware that supports the specification, I believe that the standard should provide an official extension to reintroduce support for it.

Ideally (potentially through a different extension), it would be nice to also have explicit support for instruction-level rounding-mode selection, independently of the current rounding mode, with intrinsics similar to the ones that OpenCL already defines for the conversion functions. On supporting hardware, this would make it possible to implement even more efficient, yet still robust, numerical code needing different rounding modes for separate subexpressions.

When it comes to the OpenCL programming model, it's important to specify the scope of application of state changes, of which the rounding mode is one. Given the use cases discussed above, we could say that the minimum requirement would be for OpenCL to support changing the rounding mode during kernel execution, for the whole launch grid, to a value known at kernel compile time.


Because things are not that simple once you move beyond unsigned integers.

But before we do that, I would like to point out that things aren't that simple even if we just stick to non-negative integers. Let's restrict ourselves to addition and multiplication at first, which are the simplest and best-defined operations on integers. Of course, the trouble is that if you are adding or multiplying two 8-bit numbers between 0 and 255, the result might be bigger than 255. In general, if you are operating on n-bit numbers, the result might not be representable in n bits.

So what does the computer do when this kind of overflow happens? Most programmers will now chime in and say: well duh, it wraps! However, this is not the only possibility.

For example, specialized DSP (Digital Signal Processing) hardware normally operates with saturation arithmetic: overflowing values are clamped to the maximum representable value. In particular, in saturation arithmetic algebraic addition is not associative, and multiplication does not distribute over algebraic addition. By contrast, with modular arithmetic, both expressions in each case give the correct result (as long as the final result is representable).

So, when the final result is representable, modular arithmetic gives the correct result in the case of a static sequence of operations. However, when the final result is not representable, saturation arithmetic returns values that are closer to the correct one than modular arithmetic: 300, for example, is clamped to 255, in contrast to the severely underestimated 44. Being as close as possible to the correct result is an extremely important property not just for the final result, but also for intermediate results, particularly in the cases where the sequence of operations is not static, but depends on the magnitude of the values (for example, software implementations of low- or high-pass filters).

In these applications (of which DSP, be it audio, video or image processing, is probably the most important one) both modular and saturation arithmetic might give the wrong result, but the modular result will usually be significantly worse than the one obtained by saturation. For example, 8-bit modular arithmetic might miscompute a frequency of 300 Hz as 44 Hz rather than the saturated 255 Hz, and with a cut-off anywhere above 44 Hz this would lead to attenuation of a signal that should have passed unchanged, or conversely.

Amplifying an audio signal beyond the representable values could result in silence with modular arithmetic, but will just produce the loudest possible sound with saturation. We mentioned that promotion of unsigned values to wider data types is trivial. What about demotion? For example, knowing that your original values are stored as 8-bit bytes and that the final result has to be stored again as an 8-bit byte, a programmer might consider operating with 16-bit or wider words to try and prevent overflow during computations.

However, when the final result has to be demoted again to an 8-bit byte, a choice has to be made, again: should we just discard the higher bits (which is what modular arithmetic does), or return the highest representable value when any higher bit is set (which is what saturation arithmetic does)? Of course, the real problem in the examples presented in the previous section is that the data type used (e.g. an 8-bit byte) is too narrow for the computation at hand. One of the most important things programmers should consider (maybe the most important) when doing math on the computer is precisely choosing the correct data type.

For integers, this means choosing a data type that can represent correctly not only the starting values and the final results, but also the intermediate values. If your data fits in 8 bits, then you want to use at least 16 bits. If it fits in 16 bits but not 8, then you want to use at least 32, and so on. Having a good understanding of the possible behaviors in case of overflow is extremely important to write robust code, but the main point is that you should not overflow.

In case you are still of the opinion that integer math is easy, don't worry. We still haven't gotten to the best part, which is how to deal with relative numbers, or, as the layman would call them, signed integers. Let's be honest here: non-negative integers are pretty limiting. We would at least like to have the possibility to also represent negative numbers.

And here the fun starts. Although there is no official universal standard for the representation of relative numbers (signed integers) on computers, there is undoubtedly a dominating convention, which is the one programmers are nowadays used to: two's complement.

However, this is just one of no less than four possible representations: sign bit and mantissa, ones' complement, two's complement, and offset binary (bias). One of the issues with the representation of signed integers in binary computers is that binary words can always represent an even number of values, whereas a symmetric amount of positive and negative integers, plus the value 0, is odd. Hence, when choosing the representation, one has to choose between either a redundant zero (both a positive and a negative zero) or an asymmetric range of representable values. Of the four signed number representations enumerated above, the sign bit and ones' complement representations have a signed zero, but each non-zero number has a representable opposite, while two's complement and bias only have one value for zero, but have at least one non-zero number that has no representable opposite.

Offset binary is actually very generic and can have significant asymmetries in the ranges of representable numbers. The biggest issue with having a negative zero is that it violates a commonly held assumption, namely that there is a bijective correspondence between representable numerical values and their representations, since positive and negative 0 have the same numerical value but distinct bit patterns. Where this presents the biggest issue is in the comparison of two words.

When comparing words for equality, we are now posed a conundrum: should they be compared by their value, or by their representation? Can a single behavior satisfy both needs? Is it even worth being able to tell the two zeroes apart? And finally, is the symmetry worth the loss of a representable value? On the other hand, if we want to keep the bijectivity between value and representation, we will lose the symmetry of negation.

Consider for example the standard two's complement representation with 8-bit bytes: the largest representable positive value is 127, while the largest-in-magnitude representable negative value is -128. When computing opposites, all values between -127 and 127 have their opposite (which is the one we would expect algebraically), but negating -128 gives -128 again, which, while algebraically wrong, is at least consistent with modular arithmetic, where adding -128 and -128 actually gives 0.

The conceptually simplest approach to represent signed integers, given a fixed number of digits, is to reserve one bit to indicate the sign, and leave the other n-1 bits to indicate the mantissa (i.e. the absolute value). By convention, the sign bit is usually taken to be the most significant bit, and again by convention it is taken as 0 to indicate a positive number and 1 to indicate a negative number.

With this representation, two opposite values have the same representation except for the most significant bit. So, for example, assuming our usual 8-bit byte, 1 would be represented as 00000001, while -1 would be represented as 10000001. With an 8-bit byte, the largest positive integer is 127, i.e. 01111111. As mentioned, one of the downsides of this representation is that it has both a positive and a negative zero, respectively represented by the 00000000 and 10000000 bit patterns. While the sign bit and mantissa representation is conceptually obvious, its hardware implementation is more cumbersome than it might seem at first, since operations need to explicitly take the operands' signs into account.

A more efficient approach is offered by the ones' complement representation, where negation maps to the ones' complement, i.e. flipping all the bits. For example, with 8-bit bytes, the value 1 is as usual represented as 00000001, while -1 is represented as 11111110. The range of representable numbers is the same as in the sign bit and mantissa representation, so that 8-bit bytes range from -127 to 127, and we have both a positive zero (00000000) and a negative zero (11111111). As in the sign-bit case, it is possible to tell if a number is positive or negative by looking at the most significant bit: 0 indicates a positive number, while 1 indicates a negative number, whose absolute value can then be obtained by flipping all the bits.

Sign-extending a value can be done by simply propagating the sign bit of the smaller-size representation to all the additional bits in the larger-size representation. Still, ones' complement retains a negative zero, and the two's complement representation, which is simpler to implement in hardware and has no negative zero, has gained much wider adoption. Using our usual 8-bit bytes as an example, 1 will as usual be 00000001, while -1 will be 11111111. With 8-bit bytes the largest positive number is 127, represented by 01111111, whose opposite -127 is represented by 10000001, while the largest-in-magnitude negative number is -128, represented by 10000000. In two's complement representation there is no negative zero, and the only representation for 0 is given by all bits set to 0.

However, as discussed earlier, this leads to a negative value whose opposite is the value itself, since the representation of the largest-in-magnitude negative representable number is invariant under two's complement. As in the other two representations, the most significant bit can be checked to see if a number is positive or negative. As in the ones' complement case, sign extension is done trivially by propagating the sign bit of the smaller-size value to all other bits of the larger-size value. Offset binary, or biased representation, is quite different from the other representations, but it has some very useful properties that have led to its adoption in a number of schemes (most notably the IEEE 754 standard for floating-point representation, where it's used to encode the exponent, and some DSP systems).

Before getting into the technical details of offset binary, we look at a possible motivation for its inception. The attentive reader will have noticed that all the previously mentioned representations of signed integers have one interesting property in common: they violate the natural ordering of the representations. Since the most significant bit is taken as the sign bit, and negative numbers have their most significant bit set to 1, natural ordering by bit pattern puts them after the positive numbers, whose most significant bit is set to 0.

Additionally, in the sign bit and mantissa representation, the ordering of negative numbers is reversed with respect to the natural ordering of their representations. This means that when comparing numbers it is important to know if they are signed or unsigned (and, if signed, which representation is used) to get the ordering right. The biased representation is one way (and probably the most straightforward way) to circumvent this. The bias is the value that is added to the represented value to obtain the representation, and subtracted from the representation to obtain the represented value. The minimum representable number is then the opposite of the bias.

Of course, the range of representable numbers doesn't change: if your data type can only represent 256 values, you can only choose which 256 values, as long as they are consecutive integers. For example, with 8-bit bytes (256 values) the natural choice for the bias is 128, leading to a representable range of integers from -128 to 127, which looks distinctly similar to the one that can be expressed in the two's complement representation. Of course, arbitrary biases are rarely supported in hardware, so operating on offset binary usually requires software implementations of even the most common operations, with a consequent performance hit.

Still, assuming the hardware uses modular arithmetic, offset binary is at least trivial to implement for the basic operations. One situation in which offset binary doesn't play particularly well is sign extension, which was trivial in the ones' and two's complement representations.

The biggest issue in the case of offset binary is, obviously, that the offsets in the smaller and larger data types are likely going to be different, although usually not arbitrarily so (biases are often related to the size of the data type).

What happens when o is zero? Mathematically, division by zero is not defined although in some context where infinity is considered a valid value, it may give infinity as a result —as long as the dividend is non-zero. In hardware, anything can happen. There's hardware that flags the error. There's hardware that produces bogus results without any chance of knowing that a division by zero happened.

There's hardware that produces consistent results always zero, or the maximum representable value , flagging or not flagging the situation. Of course, this means that to write robust code it's necessary to sprinkle the code with conditionals to check that divisions will successfully complete.

If the undefined division by zero may not be considered a big issue per se, the situation is much more interesting when either of the operands of the division is a negative number. First of all, one would be led to think that at least the sign of the result would be well defined: negative if the operands have opposite sign, positive otherwise.

But this is not the case for the widespread two's complement representation with modular arithmetic, where the division of two negative numbers can give a negative number: of course, we're talking about the corner case of the largest in magnitude negative number, which when divided by -1 returns itself, since its opposite is not representable.

Does the same hold true when either e or o are negative? In the third case, the equation would only be satisfied if the division rounds down, but not if the division rounds towards zero. This could lead someone to think that the best choice would be a rounding-down division with an always non-negative modulo. Integer math on a computer is simple only as far as you never think about dealing with corner cases, which you should if you want to write robust, reliable code.

With integer math, this is the minimum of what you should be aware of:. OpenCL 1. A consequence of this is that it is currently completely impossible to implement robust numerical code in OpenCL. In what follows I will explore some typical use cases where directed rounding is a powerful, sometimes essential tool for numerical analysis and scientific computing.

This will be followed by a short survey of existing hardware and software support for directed rounding. The article ends with a discussion about what must, and what should, be included in OpenCL to ensure it can be used as a robust scientific programming language. In his paper How Futile are Mindless Assessments of Roundoff in Floating-Point Computation , professor William Kahan who helped design the IEEE floating-point standard explains that, given multiple formulas that would compute the same quantity, the fastest way to determine which formulas are numerically trustworthy is to:.

Rerun each formula separately on its same input but with different directed roundings; the first one to exhibit hypersensitivity to roundoff is the first to suspect. The goal of error-analysis is not to find errors but to fix them. They have to be found first. Our failure to find errors long suspected or known to exist is too demoralizing. We may just give up. Essential tools for the error-analysis of scientific computing code cannot be implemented in OpenCL 1. Directed rounding is an important tool to ensure that arguments to functions with limited domain are computed in such a way that the conditions are respected numerically when they would be analytically.

To clarify, in this section I'm talking about correctly rounding the argument of a function, not its result. When the argument to such a function is computed through an expression particularly if such an expression is ill-conditioned whose result is close to one of the limits of the domain, the lack of correct rounding can cause the argument to be evaluated just outside of the domain instead of just inside which would be the analytically correct answer. This would cause the result of the function to be Not-a-Number instead of the correct ly rounded answer.

A discussion on the importance of correct rounding can again be found in Kahan's works, see e.g. Why we needed a floating-point standard. Robust coding of analytically correct formulas is impossible to achieve in current OpenCL. A typical example of a numerical method which needs support for different rounding modes in different parts of the computation is Interval Analysis (IA); similar arguments hold for other forms of self-verified computing as well, and none of them can be implemented efficiently in current OpenCL. From the rationales presented so far, one could deduce that directed rounding is essentially associated with the stability and robustness of numerical code.

There are however other cases where directed rounding can be used, which are not explicitly associated with things such as roundoff errors and error-bound estimation. One example comes from mesh-less particle methods, where the motion of the particles is typically determined by the interaction between each particle and its neighbors within a given influence sphere. Checking for proximity between two particles is done by computing the length of the relative distance vector (difference of positions), and the same distance is often used in the actual computation of the influence between particles.

As usual, to avoid bias, both the relative distance vector and its length should be computed with the default round-to-nearest-even rounding mode during normal operations. Due to the mesh-less nature of the method, neighborhoods may change at every time-step, requiring a rebuild of the neighbors list. To improve performance, this can be avoided by rebuilding the neighbors list at a lower frequency (e.g. every so many time-steps), enlarging the influence radius to compensate.

When such a strategy is adopted, neighbors need to be re-checked for actual proximity during normal operations, so that, for maximum efficiency, a delicate balance must be found between the reduced rebuild frequency and the increased number of potential neighbors caused by the enlarged influence radius. One way to improve efficiency in this sense is to round the computation of the relative distance vector and its length toward zero during neighbors-list construction: this maximizes the impact of the enlarged influence radius by including potential neighbors which are within one or two ULPs of it.

This allows the use of very tight bounds on how much to enlarge the influence radius, without loss of correctness in the simulations. All in all, directed rounding has been missing from OpenCL since its earliest versions, and in its present state the standard effectively prevents robust numerical code from being implemented and analyzed. While I can understand that core support for directed rounding in OpenCL is a bit of a stretch, considering the wide range of hardware that supports the specification, I believe that the standard should provide an official extension to reintroduce support for it.

Ideally (potentially through a different extension), it would be nice to also have explicit support for instruction-level rounding-mode selection, independent of the current rounding mode, with intrinsics similar to the ones that OpenCL already defines for the conversion functions. On supporting hardware, this would make it possible to implement even more efficient, yet still robust, numerical code needing different rounding modes for separate subexpressions.

When it comes to the OpenCL programming model, it's important to specify the scope of application of state changes, of which the rounding mode is one. Given the use cases discussed above, we could say that the minimum requirement would be for OpenCL to support changing the rounding mode during kernel execution, for the whole launch grid, to a value known at kernel compile time. So, it should be possible (when the appropriate extension is supported and enabled) to change rounding mode half-way through a kernel. The minimum supported granularity would thus be the whole launch grid, as long as the rounding mode can be changed dynamically during kernel execution, to any value known at compile time.

Of course, a finer granularity and a more relaxed (i.e. dynamic) rounding-mode selection would be welcome. These may be made optional, and the hardware capability in this regard could be queried through appropriate device properties. For example, considering the standard execution model for OpenCL, with work-groups mapped to compute units, it might make sense to support a granularity at the work-group level. This would be a nice addition, since it would allow e.g. different work-groups to run under different rounding modes.

But it's not strictly necessary. My son has always wanted to be an engineer. A firefighter too (or rather, the chief of the firefighters), but above all an engineer. My son is now six years old. Walking him to school we saw the plume over the central craters, and I told him it was a pity there was no red button. I can't say this site gets many visits. On a normal day, it's a handful, not even ten.

When I publish something new, it generally gets to around twenty, right on the day of publication or the one immediately after, mostly from those who follow me via feed or via FriendFeed. "Well", to be clear, means on the order of two or three thousand visits a day, for a couple of days. The article seems to have even reached Facebook, though good luck figuring out who shared it there.

The Italian people have never elected a government, not once in the history of Italy. The government is formed by the President of the Council of Ministers, who is appointed by the President of the Republic. I repeat: the government is not chosen by the people, not under our current Constitution. And it immediately turns out that those doing it are (surprise!) the same ones who reduced the power of representative democracy by eliminating the preference vote. But they are also the ones who want to reduce the number of members of parliament. They are the ones who want to eliminate perfect bicameralism.

They are the ones who want to introduce the binding mandate. To the choice of the title one must also add the blurb that accompanies it as a subtitle on the index page. Hot on its heels, in those same notes, come examples of how true conclusions can be reached from false premises through logically correct reasoning. And people cite that page in support of the mistaken opposite thesis. On March 10th, Beppe Grillo published these words on Twitter:

In this regard, let us recall a Girighiz strip by Enzo Lunari from way back. To retire from politics, Grillo would first have to enter it: politics, that is. It matters little whether they are true or not. Grillo claims to be merely the spokesman of the Movimento 5 Stelle, its megaphone. He does not take decisions on his own initiative, he does not express his own ideas.

Can we agree with Grillo on what he claims about his role in the M5S? In this sense, when Grillo expressed (and expresses) his ideas, he implicitly also expressed (and expresses) those of the Movement itself. Finally, we know that the very name and symbol of the M5S are registered trademarks of Beppe Grillo. From this standpoint, it makes sense that Grillo might want to leave the stage the moment the MoVimento came, in practice, to express a position different from his own.

One would almost hope they take him at his word and put him to the test. One is almost led to think that this obstruction toward the PD is intentional, precisely to push the PD toward suicide, something Grillo ardently hopes for, so as to obtain a crushing victory at the next elections. But let us now come to the other part of the Movement, the one that instead believes that external support for a Bersani government is something worth considering.

It would certainly be interesting to see how many of those comments come from members of the Movement and how many from outsiders (among those who state their affiliation, there are some on both sides). This is the Grillian thesis. Personally, I do not agree with this accusation of hypocrisy leveled at Grillo, though I generally find his attitude childish and superficial.

Should the Movement perhaps refuse to talk with those who accuse Grillo? It is no surprise, though, that this difference between Grillo and the Movement is something Grillo himself tries in every way to blur and make people forget. It must, instead, try to take an active role in every opportunity presented to it. And incidentally, these considerations are independent of the current balance of power in Parliament: even if Bersani's coalition had held a majority in the Senate as well, and thus had no need to open up to the M5S, the considerations would apply just the same.

Let's talk about it while it's hot, before the big decisions are made. Writing a serious, well-argued article and all would only let things move on before the article itself was finished. Let us recall in this regard Berlusconi's results, again for the aforementioned question of historical memory. The fact that his Movement also has solidly anti-political (qualunquista), populist and not-so-crypto-fascistoid foundations also helps it catch on, especially among the young.

Obviously, these razor-thin national numerical majorities do not correspond to an identical distribution of seats, thanks to the majority bonuses in both Chambers and to the regional nature of the Senate's apportionment. I would rather say hoped for, and it should come as no surprise that this is the one he is hoping for: that solution would in fact be the kind of political suicide (above all for the PD) that would give his Movement the push it needs to hope to win the next elections on its own. For the PDL too this would certainly be the best solution, being the only one that could give them an active role, not limited to throwing a wrench into every initiative of the others.

The only loser, with this solution, would be the PD. A solution of this kind would risk being useful to almost everyone: to the PD, to the M5S, but above all to the Italians.


Precisely for this reason I am under no illusion that it could seriously be taken into consideration. Finally, a few reflections on the PD and on Bersani. Honestly, I take the liberty of strongly doubting that the PD could have done better in these elections, whoever its leader had been. No, I really don't think the PD's problem is Bersani. I tried. I swear, I tried.

I held back, I devoted myself to other things, I concentrated on everything else I had to write. But today I couldn't resist. For these February elections, though, I have the feeling that we have, somehow, hit rock bottom. The two factors in question are the pencil and the photo, which I will present here and whose problems I will then discuss. That Grillo's voters and the movement's representatives, lost in their enthusiasm and their conspiracy paranoias, are unaware of this implication would not be much of a problem in itself: in the end, it would simply mean a few votes fewer for the Movement.

As visually discussed here, a set of equations has recently been popping up as graffiti in Belgium. The equations define five functions of one variable, namely:. As mathematicians, we can take this a step further and define an Anarchist curve, by finding the implicit form of the plot of each of the functions, and then bringing them together. In this case, f, g together define the circle, with equation. We first rewrite j in a nice form as. If we then multiply this by the left-hand side of the implicit equation for the circle, we have the Anarchist curve.
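The underlying principle is standard: a curve given implicitly is the zero set of an expression, and the union of two such curves is the zero set of the product of their expressions. A generic sketch (the specific polynomials of the graffiti are not reproduced here):

```latex
% Union of implicitly defined curves via products: if
% C_1 = \{(x,y) : F(x,y) = 0\} and C_2 = \{(x,y) : G(x,y) = 0\},
% then their union is the zero set of the product:
C_1 \cup C_2 \;=\; \{(x,y) \;:\; F(x,y)\,G(x,y) = 0\}
```

This is why multiplying the rewritten form of j by the left-hand side of the circle's implicit equation merges the curves into a single equation.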

If the year is to be considered, better dates can be chosen, with the following argument. So, March 7th (resp. July 3rd, depending on date notation) are both better approximations than March 14th. There are so many other interesting numbers to look into! The proposed Tau Day in the manifesto linked above is thus on June 28th, following the North American tradition. We obviously disagree, and would rather look for a fractional date choice. Sadly, in this case the fractional approximation is worse than the truncation.
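A plausible reconstruction of the elided argument, under the assumption that a date m/d is read as the fraction m + 1/d: the classical rational approximation 22/7 = 3 + 1/7 is closer to π than the truncation 3.14, so a "3 and 1/7" date (March 7th, or July 3rd in day-first notation) beats March 14th:

```latex
\frac{22}{7} \;=\; 3 + \frac17 \;\approx\; 3.142857,
\qquad
\left|\frac{22}{7} - \pi\right| \approx 0.00126
\;<\;
\left|\,3.14 - \pi\,\right| \approx 0.00159 .
```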

Similarly, we can go looking for the best date for Napier's number e, for which we can choose 19/7. For anybody interested in exploring rational approximating dates, I've also cooked up a quick'n'dirty Ruby script that does the finding for you. If the formulas make no sense in your browser, report the problem to the browser's developers, or switch to a browser that supports these standards. In the process, we will make use of two further observations.

There are strong hints that our sequences F and h are complementary Beatty sequences. To conclude that the Beatty sequences with these generators are indeed the ones we were looking for, it now remains to verify that they satisfy the two criteria (5).

These two sequences are better known as the lower (F) and upper (h) Wythoff sequences. They were discovered by the eponymous mathematician while studying a famous problem of game theory which goes by various names (Wythoff's problem, the cookie-jar game, etc.). The game has the following form: there are two jars of cookies, and two players take turns choosing as many cookies as they want from a single one of the two jars, or the same number of cookies from both jars.

The player who takes the last cookie wins. And these are just two of the systems that are or have been in use on our planet. Let me propose a warm-up exercise. Let us suppose we know that the aliens write, like us, from left to right and then from top to bottom.

Let us also suppose that we have identified the eight symbols used to write the numbers, which we will transcribe with the first eight letters of the Latin alphabet: A B C D E F G H. Can we begin to interpret the numbers, and figure out which operation is involved, from this alone? As in bijective numeration systems, the eight digits represent the numbers from 1 to 8, and no zero is used. In addition, the aliens write their numbers along the writing direction starting from the least significant digits, and close with the digit given by the number modulo 7.

The use of the extra digit makes it easier to detect errors. In our usual numeral system, this point would be applied by appending to the normal sequence of digits the value of the same number modulo 9. Separating the number from the check digit with a vertical bar, we would write 12|3 or 15|6; adopting the alien system (base aside), we would instead have , , respectively. To this end, let us compute:. Three couples decide to get take-away pizza for dinner. When they get to the pizzeria, they order, respectively, two margheritas without oil, a caprese and a bresaola, and a caprese and a vulcano.

When the pizzas are ready, the attendant hands them over stacked in an unknown order. Each of the three gentlemen takes two of the pizzas, to split the load evenly for the trip home. The permutations in question can also be enumerated in full:. How many distinct permutations are possible instead 1?

If the pizzas were all different, there would be 6! of them. They are therefore two sets with non-empty intersection. Let X be a set. Among these, let us recall:. Let us now come to our question: take the set X of working people, and define two relations on this set. If x, y are people and x works with y, we will write x C y. If x works for y, we will write x P y.

Let us give an example. Apparently, graffiti such as this and this have appeared in Bruxelles (and who knows where else). I must say it's not too common to see mathematical graffiti, so let's have a look at them a little more closely. Let's start with a transcription:. There are a few things I don't like about some of the choices made (such as the choice of decimal separator, which could be avoided altogether, as we'll see later), but let's first try to understand what we have, as-is.

What we're looking at is the definition of five distinct functions of a single variable. Thus, another way to look at this is that we have an algebraic description of five curves. The obvious implication here would be that, if we were to plot these curves, we'd get another picture: the actual, hidden, graffiti.