How to Display a Date on a TextField Tap with Swift

Displaying a Date on a TextField Tap with Swift

First, you must have a text field connected to your storyboard via an outlet:


@IBOutlet weak var txtField_helloDatePicker: UITextField!

Next, you should have a constant of type UIDatePicker:

let datePicker = UIDatePicker()

Then, inside the viewDidLoad function, set the date picker mode to date:

datePicker.datePickerMode = UIDatePickerMode.Date

Then, to show the date picker when the text field is tapped, set the input view of that text field to the date picker:
        
txtField_helloDatePicker.inputView = datePicker

Then, to respond to changes in the selection, add a target-action for the value-changed event:
        

datePicker.addTarget(self, action: Selector("datePickerChanged:"), forControlEvents: UIControlEvents.ValueChanged)

Then, to handle the date picker's action, you should have:

func datePickerChanged(datePicker: UIDatePicker) {
    println("date picker changed")

    let dateFormatter = NSDateFormatter()

    // Use a date style; the picker is in date mode, so a time style alone would not show the selected date.
    dateFormatter.dateStyle = NSDateFormatterStyle.ShortStyle

    let strDate = dateFormatter.stringFromDate(datePicker.date)

    txtField_helloDatePicker.text = strDate
}

Summary of Code:

datePicker.datePickerMode = UIDatePickerMode.Date
        
txtField_helloDatePicker.inputView = datePicker
        
datePicker.addTarget(self, action: Selector("datePickerChanged:"), forControlEvents: UIControlEvents.ValueChanged)

func datePickerChanged(datePicker: UIDatePicker) {
    println("date picker changed")

    let dateFormatter = NSDateFormatter()

    // Use a date style; the picker is in date mode, so a time style alone would not show the selected date.
    dateFormatter.dateStyle = NSDateFormatterStyle.ShortStyle

    let strDate = dateFormatter.stringFromDate(datePicker.date)

    txtField_helloDatePicker.text = strDate
}

Common Swift casting and type conversion

A Collection of Common Swift Casts and Type Conversions

String to Double
var a  = "1.5"
var b  = (a as NSString).doubleValue

Double to String
var a : Double = 1.5
var b  = String(format:"%f", a)

Double to Int
var a : Double = 1.5
var b  = Int(a)

String to Float
var a  = "1.5"
var b  = (a as NSString).floatValue

String to Int
var a  = "1.5"
var b  = (a as NSString).intValue


How to Display an Alert View with Swift

Display an Alert View with Swift

First, we'll create a constant of type UIAlertController. Since we'll be using an alert, the preferredStyle is set to UIAlertControllerStyle.Alert. The only other style is ActionSheet, which presents the choices as an action sheet instead.

 let alertPrompt = UIAlertController(title: "Simple Alert View", message: "Hello, World", preferredStyle: UIAlertControllerStyle.Alert)

Next, we'll add a button to the newly created alert controller. We'll set the handler to nil since we don't need any action yet, but you may call a function when a button is clicked by passing that function (or a closure) as the handler.
Note: The handler must take a single parameter of type UIAlertAction (e.g. func buttonHandler(alertView: UIAlertAction!) {}).

alertPrompt.addAction(UIAlertAction(title: "Ok", style: UIAlertActionStyle.Default, handler:nil))
            
And lastly, in order to show the alert view, you must present it by calling the presentViewController method.


presentViewController(alertPrompt, animated: true, completion: nil)

Summary of Code:

Inside an action function, place the following lines of code.

let alertPrompt = UIAlertController(title: "Simple Alert View", message: "Hello, World", preferredStyle: UIAlertControllerStyle.Alert)

alertPrompt.addAction(UIAlertAction(title: "Ok", style: UIAlertActionStyle.Default, handler:nil))
            

presentViewController(alertPrompt, animated: true, completion: nil)

Or, to handle a button click in the alert view, we can have:

let alertPrompt = UIAlertController(title: "Simple Alert View", message: "Hello, World", preferredStyle: UIAlertControllerStyle.Alert)

alertPrompt.addAction(UIAlertAction(title: "Ok", style: UIAlertActionStyle.Default, handler: btn_clicked))
            

presentViewController(alertPrompt, animated: true, completion: nil)

Then we can declare the button handler:

func btn_clicked(alertView: UIAlertAction!) {
    // Respond to the tap here, e.g. dismiss or present another view controller.
    println("Ok")
}

Technical analysis of Qualys' GHOST

This morning, a leaked note from Qualys' external PR agency made us aware of GHOST. In this blog entry, our crack team of analysts examines the technical details of GHOST and makes a series of recommendations to better protect your enterprise from mishaps of this sort.





Figure 1: The logo of GHOST, courtesy of Qualys PR.



Internally, GHOST appears to be implemented as a lossy representation of a two-dimensional raster image, combining YCbCr chroma subsampling and DCT quantization techniques to achieve high compression rates; among security professionals, this technique is known as JPEG/JFIF. This compressed datastream maps to an underlying array of 8-bpp RGB pixels, arranged sequentially into a rectangular shape that is 300 pixels wide and 320 pixels high. The image is not accompanied by an embedded color profile; we must note that this poses a considerable risk that on some devices, the picture may not be rendered faithfully and that crucial information may be lost.



In addition to the compressed image data, the file also contains APP12, EXIF, and XMP sections totaling 818 bytes. This metadata tells us that the image has been created with Photoshop CC on Macintosh. Our security personnel note that Photoshop CC is an obsolete version of the application, superseded last year by Photoshop CC 2014. In line with industry best practices and OWASP guidelines, we recommend that all users urgently upgrade their copy of Photoshop to avoid exposure to potential security risks.



The image file modification date returned by the HTTP server at community.qualys.com is Thu, 02 Oct 2014 02:40:27 GMT (Last-Modified, link). The roughly 90-day delay between the creation of the image and the release of the advisory probably corresponds to the industry-standard period needed to test the materials with appropriate focus groups.



Removal of the metadata allows the JPEG image to be shrunk from 22,049 to 21,192 bytes (-4%) without any loss of image quality; enterprises wishing to conserve vulnerability-disclosure-related bandwidth may want to consider running jhead -purejpg to accomplish this goal.



Of course, all this mundane technical detail about JPEG images distracts us from the broader issue highlighted by the GHOST report. We're talking here about the fact that JPEG compression is not particularly suitable for non-photographic content such as logos, especially when the graphics need to be reproduced with high fidelity or repeatedly incorporated into other work. To illustrate the ringing artifacts introduced by the lossy compression algorithm used by the JPEG file format, our investigative team prepared this enhanced visualization:





Figure 2: A critical flaw in GHOST: ringing artifacts.



Artifacts aside, our research has conclusively shown that the JPEG format offers an inferior compression ratio compared to some of the alternatives. In particular, when converted to a 12-color PNG and processed with pngcrush, the same image can be shrunk to 4,229 bytes (-80%):





Figure 3: Optimized GHOST after conversion to PNG.



PS. Tavis also points out that ">_" is not a standard unix shell prompt. We believe that such design errors can be automatically prevented with commercially-available static logo analysis tools.



PPS. On a more serious note, check out this message to get a sense of the risk your server may be at. Either way, it's smart to upgrade.

How to get the client's IP address in JAX-RS and JAX-WS

In this tutorial I won't teach how to create a JAX-RS or JAX-WS web service, but rather show how to get the IP address of the client (caller) that invokes the service.

In JAX-RS:

// Inject the underlying servlet request.
@Context
private HttpServletRequest httpServletRequest;

public void serviceMethod() {
    // Get the caller's IP.
    log.debug("IP=" + httpServletRequest.getRemoteAddr());
}
In JAX-WS, we inject a different resource, but it's almost the same.

@Resource
private WebServiceContext wsContext;

public void serviceMethod() {
    MessageContext mc = wsContext.getMessageContext();
    HttpServletRequest req = (HttpServletRequest) mc.get(MessageContext.SERVLET_REQUEST);

    log.debug("IP=" + req.getRemoteAddr());
}

If you need help building a JAX-RS service, here's how I do things: http://czetsuya-tech.blogspot.com/2014/11/rest-testing-with-arquillian-in-jboss.html.

On the Instability of Unilateral Fixed Exchange Rate Regimes

Was there an easy way to bet on a CHF/EUR appreciation? Because if there was, we must all be kicking ourselves for not exploiting that trade!

The difference between winning and losing in the FX market is usually just a matter of luck. To a first approximation, floating exchange rates seem to follow a random walk (see here). But the trade I'm describing here is one I think we should have expected to pay off for reasons beyond pure luck. That is, there is a pretty sensible theory of currency crises that might have guided our investment strategy in the present context. In particular, I'm thinking of Paul Krugman's (1979) model, which he describes here.

The basic idea is as follows. Suppose that a central bank wants to peg its currency relative to some other currency. Suppose that it does so unilaterally. The success of the peg will depend critically on its perceived credibility. This credibility may depend on, among other things, the amount of foreign reserves held by our intrepid central bank. To defend the peg, the central bank must stand ready to buy its own currency on the FX market, which it does by selling off its stock of foreign reserves.

A unilateral peg of this sort is just ripe for speculation. The two most likely outcomes in this case are (1) the peg holds or (2) the peg fails (the domestic currency depreciates). The trade in this case is to go short on the pegging bank's currency and long in the foreign currency. A speculator either breaks even if (1) or wins if (2). It's a can't-lose proposition (but please don't try this at home, kids). Rational speculators, recognizing the opportunity, start shorting the pegged currency. If they do so en masse, our little central bank will soon run out of reserves and be forced to abandon the peg--a self-fulfilling prophecy.
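To make the payoff asymmetry concrete (made-up numbers, ignoring interest and transaction costs): suppose the peg is 10 pesos per dollar, and a speculator borrows 1,000,000 pesos and converts them into $100,000 at the peg.

\[
\text{peg holds: repay } \frac{1{,}000{,}000}{10} = \$100{,}000 \;\Rightarrow\; \text{profit} \approx \$0
\]
\[
\text{peg breaks to 12.5: repay } \frac{1{,}000{,}000}{12.5} = \$80{,}000 \;\Rightarrow\; \text{profit} = \$20{,}000
\]

The downside is bounded near zero while the upside scales with the size of the devaluation, which is exactly why such a bet attracts speculators.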

I didn't spot this in the case of the SNB because, well, Switzerland is not a banana republic--the Swiss Franc is considered a safe-haven security. And the SNB was pegging because it was worried about currency appreciation--not the usual concerns about excess volatility or depreciation. Of course, there was never any danger of the SNB running out of reserves--they can print all the Francs they want! So what was the danger?

Central bankers are by nature a highly conservative bunch. They become uncomfortable with things that are unfamiliar. Like balance sheets the size of the moon, for example. With an ECB QE policy on the horizon, there was the prospect of EUR for CHF conversions proceeding at an even more rapid rate--leading to a very, very large SNB balance sheet. My claim is this: we should have guessed that the SNB would have at some point in this process lost its nerve and abandoned the peg, allowing their currency to appreciate (And if it didn't lose its nerve, the peg would have been maintained, so we would not have lost on the other likely outcome of my proposed bet). Rational speculators anticipating this should have ... oh well, forget it. (Let's try it next time and see what happens?).

As for the SNB abandoning its peg, especially the way it did, well, it just seems crazy to me. It would have made sense if one thought that EUR inflation was likely to take off. But all the worry at present is directed toward the prospect of EUR deflation. Yes, that's right, the SNB is stocking up on a currency whose purchasing power is projected to increase. And as for being concerned about EUR inflation because of QE, it seems unlikely to me that the ECB wouldn't be willing and able to defend its low inflation target.

In short, I think the SNB could have let its balance sheet grow much larger without any significant economic repercussions. Instead, by removing the peg as they did, they suffered a huge and needless capital loss on their EUR assets. Strange move. But how can we argue against the past success of Swiss bankers?

On the plus side, I suppose we can no longer claim the Swiss to be boring.

afl-fuzz: making up grammar with a dictionary in hand

One of the most significant limitations of afl-fuzz is that its mutation engine is syntax-blind and optimized for compact data formats, such as binary files (e.g., archives, multimedia) or terse human-readable languages (RTF, shell scripts). Any general-purpose fuzzer will have a harder time dealing with more verbose dialects, such as SQL or HTTP. You can improve your odds in a variety of ways, and the results can be surprisingly good - but ultimately, it's never easy to get from Set-Cookie: FOO=BAR to Content-Length: -1 by randomly flipping bits.



The common wisdom is that if you want to fuzz data formats with such ornate grammars, you need to build a one-off, protocol-specific mutation engine with the appropriate syntax templates baked in. Of course, writing such code isn't easy. In essence, you need to manually build a model precise enough so that the generated test cases almost always make sense to the targeted parser - but creative enough to trigger unintended behaviors in that codebase. It takes considerable experience and a fair amount of time to get it just right.



I was thinking about using afl-fuzz to reach some middle ground between the two worlds. I quickly realized that if you give the fuzzer a list of basic syntax tokens - say, the set of reserved keywords defined in the spec - the instrumentation-guided nature of the tool means that even if we just mindlessly clobber the tokens together, we will be able to distinguish between combinations that are nonsensical and ones that actually follow the rules of the underlying grammar and therefore trigger new states in the instrumented binary. By discarding that first class of inputs and refining the other, we could progressively construct more complex and meaningful syntax as we go.
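The dictionary itself is nothing fancy - just a flat list of tokens. As an illustrative sketch (the exact on-disk format depends on the afl-fuzz version: newer releases accept a text file with one name="token" entry per line, while older ones take a directory with one token per file), an SQL dictionary might contain entries along these lines:

keyword_select="select"
keyword_from="from"
keyword_union="union"
keyword_limit="limit"
func_hex="hex("

The fuzzer then splices these tokens into the test cases it mutates and, as described above, keeps whichever combinations light up new paths in the instrumented binary.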



Ideas are cheap, but when I implemented this one, it turned out to be a good bet. For example, I tried it against sqlite, with the fuzzer fed a collection of keywords grabbed from the project's docs (-x testcases/_extras/sql/). Equipped with this knowledge, afl-fuzz quickly spewed out a range of valid if unusual statements, such as:


select sum(1)LIMIT(select sum(1)LIMIT -1,1);
select round( -1)````;
select group_concat(DISTINCT+1) |1;
select length(?)in( hex(1)+++1,1);
select abs(+0+ hex(1)-NOT+1) t1;
select DISTINCT "Y","b",(1)"Y","b",(1);
select - (1)AND"a","b";
select ?1in(CURRENT_DATE,1,1);
select - "a"LIMIT- /* */ /* */- /* */ /* */-1;
select strftime(1, sqlite_source_id());



(It also found a couple of crashing bugs.)



All right, all right: grabbing keywords is much easier than specifying the underlying grammar, but it still takes some work. I've been wondering how to scratch that itch, too - and came up with a fairly simple algorithm that can help those who do not have the time or the inclination to construct a proper dictionary.



To explain the approach, it's useful to rely on the example of a PNG file. The PNG format uses four-byte, human-readable magic values to indicate the beginning of a section, say:


89 50 4e 47 0d 0a 1a 0a 00 00 00 0d 49 48 44 52 | .PNG........IHDR

00 00 00 20 00 00 00 20 02 03 00 00 00 0e 14 92 | ................



The algorithm in question can identify "IHDR" as a syntax token by piggybacking on top of the deterministic, sequential bit flips that are already being performed by afl-fuzz across the entire file. It works by identifying runs of bytes that satisfy a simple property: that flipping them triggers an execution path that is distinct from the product of flipping stuff in the neighboring regions, yet consistent across the entire sequence of bytes.



This signal strongly implies that touching any of the affected bytes causes the failure of an underlying atomic check, such as header.magic_value == 0xDEADBEEF or strcmp(name, "Set-Cookie"). When such a behavior is detected, the entire blob of data is added to the dictionary, to be randomly recombined with other dictionary tokens later on.
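Here is a minimal sketch of that heuristic in C - not the actual afl-fuzz implementation; trace_checksum() and add_auto_token() are hypothetical stand-ins for hashing the instrumented execution path and extending the dictionary, and the run-length bounds are arbitrary:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical helpers, not real afl-fuzz APIs: trace_checksum() runs the
   instrumented target on buf and returns a hash of the execution path taken;
   add_auto_token() appends a candidate token to the dictionary. */
uint32_t trace_checksum(const uint8_t *buf, size_t len);
void add_auto_token(const uint8_t *token, size_t len);

/* Look for runs of bytes that all behave the same way when flipped - and
   differently from the unmodified input - suggesting an atomic comparison
   such as memcmp() or strcmp() against a magic value. */
void detect_tokens(const uint8_t *in, size_t len) {
    uint32_t base = trace_checksum(in, len);
    uint32_t *cksum = malloc(len * sizeof(uint32_t));
    uint8_t *tmp = malloc(len);

    /* For simplicity, flip one whole byte at a time; afl-fuzz piggybacks on
       the bit flips it already performs during its deterministic stage. */
    for (size_t i = 0; i < len; i++) {
        memcpy(tmp, in, len);
        tmp[i] ^= 0xFF;
        cksum[i] = trace_checksum(tmp, len);
    }

    /* Emit maximal runs that share one checksum distinct from the baseline.
       The length bounds (3..32) are arbitrary choices for this sketch. */
    size_t start = 0;
    for (size_t i = 1; i <= len; i++) {
        if (i == len || cksum[i] != cksum[start]) {
            size_t run = i - start;
            if (run >= 3 && run <= 32 && cksum[start] != base)
                add_auto_token(in + start, run);
            start = i;
        }
    }

    free(tmp);
    free(cksum);
}

In the real tool this detection rides on the deterministic bit-flip stage that already runs anyway, so it comes essentially for free; the sketch separates it into its own pass only for clarity.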



This second trick is not a substitute for a proper, hand-crafted list of keywords; for one, it will only know about the syntax tokens that were present in the input files, or could be synthesized easily. It will also not do much when pitted against optimized, tree-based parsers that do not perform atomic string comparisons. (The fuzzer itself can often clear that last obstacle anyway, but the process will be slow.)



Well, that's it. If you want to try out the new features, click here and let me know how it goes!

On the Want of Bold, Persistent Experimentation

How should policymakers react to an economic crisis or ongoing economic malaise--an event that has taken them by surprise and/or left them searching for answers?

Brad DeLong's prescription is to follow the example set by FDR in the 1930s: How to Fix the Economy: "Try Everything".  He favorably quotes the former president, who once proclaimed:
“The country needs and ... demands bold, persistent experimentation,” he said in 1932. “Take a method and try it. If it fails, admit it frankly, and try another. But above all, try something.”
In some ways, this sounds admirable. But in other ways, it sounds...well, it sounds a bit crazy. Even DeLong acknowledges this when he writes:
To be sure, Roosevelt’s New Deal policies sometimes conflicted with one another, and quite a few of them were counterproductive. But, by trying everything, and then scaling up the most successful policies, Roosevelt was ultimately able to turn the economy around. 
Hmm. Ultimately turned the economy around? I guess so...even if it did take 8 years. One has to wonder how long it would have taken if FDR had done nothing at all?  I also wonder which of the many (some declared unconstitutional) experiments ultimately turned the economy around. The bold experiment of declaring war in 1941?


One of the problems associated with macroeconomic experimentation, apart from the fact that most experiments fail, is the aura of uncertainty it engenders. The appearance of senior leaders resorting to bold and persistent experiments is unbecoming and even a little scary. What will they think of next?! Should I invest now, or should I wait?! It does not take a rocket scientist to appreciate the effect that policy uncertainty might have on prolonging an economic slump. I'm not sure how important this force is quantitatively (because it is hard to measure) but I don't think one can easily dismiss the role it can play in an economic crisis and recovery. Certainly, there is no shortage of narratives out there that blame FDR's "bold and persistent experiments" for transforming a recession into a depression (many also blame President Hoover for the same reason).

Truth be told, I doubt that DeLong actually endorses "bold, persistent experimentation" in the sense of "anything goes." The set of "bold, persistent experiments" after all is very, very large. As he suggests, we already possess a set of tools--we (think) we know the nature of promising interventions--if only those squabbling politicians would employ them! In addition, he provides a short list of potential interventions (some of which, like QE, were actually implemented).

It seems that DeLong was motivated to write this piece mainly to criticize Martin Feldstein's needlessly inflammatory language in promoting an otherwise sensible policy proposal. I do agree with DeLong on that sentiment. But if this was the intended purpose of his article, then why invoke Hoover-FDR fables?*  And why speak favorably of the FDR-style "kitchen sink" approach to macro policy?  After all, if we don't know what we're doing, then isn't the principle of primum non nocere at least as compelling?

----------------------------------------------------------------------------------------------------------------
*Note: FDR actually criticized Hoover in 1932 for his "reckless and extravagant" fiscal policy. Consider the following data:

In Agreement with Cappelen


The right addressee, however, is the Supreme Court, which writes: "Research Director Ådne Cappelen at Statistics Norway, who testified as a privately engaged expert witness for Kreutzer in the Court of Appeal, expressed in his written statement that a financial fortune invested mainly in government bonds, but with one third in shares on the Oslo Stock Exchange, 'is not a very risky allocation'."

The paragraph that the Supreme Court refers to also states that the real return in that case would be five percent.

There is every reason to believe that this was decisive for the Supreme Court, as there are otherwise few arguments for a rate as high as four percent. The decision appears a little odd even when we include Cappelen's estimate. Without Cappelen, the link between the court's assessment and that of the independent experts disappears. The average estimate of the court's own experts was 2.4 percent.

The paragraph the Supreme Court has latched onto has, by all appearances, been torn out of its context. The central issue was originally the guardianship provision's restrictions on investments, and the note must be read in that light. If I were the Supreme Court, I would probably have considered calling in Cappelen as a witness before resting such an important decision on this paragraph.

Let us assume that the correct rate is three percent. In that case, compensation awarded over the next twenty years (until the next time the Supreme Court sets the capitalization rate) will cover less than 80 percent of the amounts awarded. The paragraph at the bottom of page three of Cappelen's note has probably acquired far greater significance than its author ever imagined.

How to implement no caching on JSF pages

Currently, there are two ways I know of that can do this.

1.) By implementing a Filter and manually overriding the cache-control headers. See the code below (note: this is not originally mine; I found it somewhere else a long time ago):

import java.io.IOException;

import javax.faces.application.ResourceHandler;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebFilter(servletNames = { "Faces Servlet" })
public class NoCacheFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res,
            FilterChain chain) throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        // Skip JSF resources (CSS/JS/Images/etc).
        if (!request.getRequestURI().startsWith(
                request.getContextPath() + ResourceHandler.RESOURCE_IDENTIFIER)) {
            response.setHeader("Cache-Control",
                    "no-cache, no-store, must-revalidate"); // HTTP 1.1.
            response.setHeader("Pragma", "no-cache"); // HTTP 1.0.
            response.setDateHeader("Expires", 0); // Proxies.
        }

        chain.doFilter(req, res);
    }

    @Override
    public void init(FilterConfig filterConfig) throws ServletException {
    }

    @Override
    public void destroy() {
    }

}

2.) The one I prefer: using OmniFaces' CacheControlFilter. More information is available at: http://showcase.omnifaces.org/filters/CacheControlFilter. To use it, you just need to declare the filter and map it to the Faces Servlet.

<filter>
    <filter-name>noCache</filter-name>
    <filter-class>org.omnifaces.filter.CacheControlFilter</filter-class>
</filter>

<filter-mapping>
    <filter-name>noCache</filter-name>
    <servlet-name>Faces Servlet</servlet-name>
</filter-mapping>

Better, but Not Good Enough

A fresh Supreme Court decision may increase compensation payouts from insurance companies by up to 30 percent. That is probably too little.

When disability is covered by insurance, the injured party is compensated for future losses. The settlement, however, is a lump sum, so the injured party is expected to use the return on the insurance payout to cover the annual losses.

The court's estimate of this return is called the "capitalization rate". The Supreme Court has just decided that it is to be reduced from five to four percent. When the expected return is lowered, the insurance companies must pay out a larger amount for the compensation to yield enough return.
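To sketch the mechanism with purely illustrative numbers (actual awards depend on the individual loss profile and horizon): if the annual loss is T and it runs for n years, the lump sum L is the present value of that stream discounted at the capitalization rate r,

\[
L = T \cdot \frac{1 - (1+r)^{-n}}{r}.
\]

With T = 100,000 and n = 25, the annuity factor is roughly 14.1 at r = 5%, 15.6 at r = 4%, and 17.4 at r = 3%, so cutting the rate from five to four percent raises the payout by about 11 percent, and cutting it to three percent by about 24 percent.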

It is good that the Supreme Court has finally taken hold of this issue. Too high a capitalization rate has previously produced compensation that was too low.

Over the last hundred years, a portfolio of 80% bonds and 20% equities has yielded a real return of 2.3 percent, in line with the capitalization rate in the US and England. There is broad agreement among economists that the return on capital will be even lower in the future.

The Supreme Court's own expert group consisted of some of the country's leading academics in the field: Thore Johnsen, Espen R. Moen, and Steinar Holden. Each gave his own estimate of the return on a portfolio with moderate risk (20% equities), and the average was 2.4 percent. This is quite far from the Supreme Court's decision of four.

How the Supreme Court ended up so high is a bit of a mystery. The insurance company's own expert, a director at Handelsbanken, estimated the real return at 4.6-4.7 percent. Handelsbanken is itself hit by a lower capitalization rate in the form of increased insurance payouts. There are also substantial professional objections to Handelsbanken's estimate, both with regard to the sample and the choice of bond index. The Supreme Court cannot have placed much weight on this estimate.

Ådne Cappelen, research director at Statistics Norway (SSB), estimated the return in the same case at five percent. This is very high compared with the expert group's estimates, mainly because Cappelen's calculation is based on the rather short period 1993-2012. As the figure shows, this period is not particularly representative. The same criticism can be levelled at Handelsbanken's calculation.

In addition, all the experts except Cappelen assumed inflation equal to the central bank's inflation target. If we adjust for that and add Cappelen's calculation as one of four estimates, the four independent experts' estimates average 2.9 percent.

The wording of the judgment may suggest that the Supreme Court believes the injured party should be able to achieve a four percent return by taking on more risk. The Supreme Court's four percent return corresponds to a portfolio with 60 percent equities, if we take the average estimate of the four independent experts as the basis.

One third of such a portfolio would have vanished during the financial crisis. That squares poorly with the Supreme Court's own formulation that "... the injured party cannot be required to make investments that carry any particular risk". Most amateurs would also have panicked in such a situation and sold out at a loss. There is no reason to believe that a traffic-accident victim would have acted differently. It is by no means unlikely that a similar crisis will occur within the next twenty years.

The Supreme Court itself suggests investments that it believes can yield a return almost without risk. The financial crisis showed that no such thing exists. Had a traffic-injured amateur investor followed the court's investment advice in 2008 and put the money in bank bonds instead of government bonds, the losses would have been far greater.

The court does, admittedly, point to the statement from the insurance company's expert that it is common for pension funds to hold 50% in equities, but is it reasonable to compare a professional pension manager with a near-infinite time horizon to an amateur with a limited lifespan?

One possible explanation for the high estimate may be that the estimates are uncertain, and that "... the uncertainty [may] call for caution in making large changes to the current legal position", as the court points out.

This, however, is a question of how money is distributed. Too high a rate gives too much to the insurance company. Too low a rate gives too much to the injured party. The difference is that the insurance company's costs end up on a bottom line. The quote above may suggest that the insurance companies' needs are thereby given greater weight. But the losses for the injured are just as real as the insurance company's, even if they never show up in any accounts.




(The author was engaged by the injured party in this case as an expert consultant and witness.)