A Bureaucratic Stock Market Hit

As a cabinet minister last year, Trond Giske bought Cermaq shares for NOK 1.6 billion. The gain came to NOK 600 million. The ministry traded the shares hoping for a short-term profit, and succeeded. The deal shows that the state can profit from not selling to the first buyer that comes along.

The Cermaq shares returned 38% in a little over a year. That is a solid performance compared with the index, which rose only 22% over the same period. The performance is all the more impressive given that the purpose of the investment was short-term gain. The purchase raised the state's ownership stake from 44% to 59%, but the mandate from the Storting makes clear that the purchase was not at all motivated by a red-green desire for more state ownership. The ministry simply saw a short-term profit opportunity in the stock market. Giske believed the share was undervalued.

Last year's purchase came after John Fredriksen's Marine Harvest bid NOK 107 per share. After the dividends paid in 2014, that corresponds to a price of NOK 55 today. Fredriksen's bid now looks like a joke. The state is getting almost twice as much from Mitsubishi, which is paying NOK 96 per share. At current earnings, that gives the Japanese company a return of around 6% on invested capital. Had the ministry accepted Fredriksen's low bid, the Cypriot billionaire would have achieved a return of nearly twice that.

The price the state has achieved is far higher relative to earnings than the Oslo Børs average. Earnings will probably have to rise considerably for the deal to pay off for Mitsubishi, but that may well happen.

The state probably bought the shares to get into a better position ahead of a sell-down. The ambition was "... that the stake be realized in a later industrial transaction," according to the mandate. It had already been decided that the ownership stake should be reduced to 34%, so the purchase was clearly part of a sales process. For the current government, selling Cermaq as majority owner has undoubtedly been a great advantage.

Admittedly, the ministry had some help. Both Cermaq's board and the external adviser from Fondsfinans believed the company was undervalued, so there is no need yet for the Minister of Trade and Industry to set up a "Department of Short-Term Stock Market Speculation." But what lesson should we draw from the ministry's successful Cermaq speculation?

The conclusion should be that the ministry has handled the Cermaq sell-down sensibly. The state still should not own ordinary commercial companies. Almost any commercial company can be portrayed as strategically important to the state, but the argument is often rather contrived.

The Cermaq sale shows, however, that the state has much to gain from being a sensible seller.
As a rule, the share price gives a good indication of a company's fair value, so by that logic the ministry could have sold to Fredriksen last year. But majority stakes often change hands for more than the share price.

A majority owner can change how the business is run or find economies of scale, which can raise profits. It is therefore not unusual for a buyer to pay more than the listed price for a business. There is no reason the state should not try to capture as much of that premium as possible.

This means that when the state sells businesses, it should take its time and attract as many interested parties as possible. The sales of municipal power plants around the turn of the millennium, for example, often bore the marks of an unprofessional process in which some municipalities accepted the first offer that came along. The power companies were not listed either, so the municipalities had no market value to use as a starting point.

There are, in other words, still good reasons for the state to reduce its holdings in ordinary commercial businesses. Investing the proceeds in the Oil Fund will always give lower risk relative to return, as long as the sale happens at market price. Even so, there is no reason to accept the first bid that comes along. The state should make sure to extract as much as possible of the value in the companies it sells.

How to set up an IT company in the Philippines


Setting up an IT company in the Philippines can be a challenging task. You should know where to set it up and have enough resources, both technical and human, to keep it up and running. Here are some considerations you need to take into account:

The name of the company. You probably already have a working idea of what your company needs. However, you need to know which permits are required to set it up. You can come to the Philippines personally and inquire, or first talk to someone who knows how to acquire such permits and licenses. This way, you save on travel expenses.

You can also search for names already registered in the Philippines to avoid confusion. This matters for branding: you'll be working to make your company big and famous, and you don't want a company with the same name benefiting from it. So choose wisely.

Business location. The metropolis may seem the ideal place to start your company; however, you should not underestimate the potential of other locations. A trusted contact in the Philippines who already knows the market your company will cater to can help you judge this.


Human and technical resources. A human resources person will probably take care of hiring the IT professionals you need. Still, it is best to have one of each: one trusted person for human resources and one for the IT side of the business. Two heads are better than one, as the old adage goes. You want someone who can easily spot great potential, and you also want someone with genuine IT skills and knowledge.

Research and create a business plan. A goal without a plan is bound to fail, and a plan without a goal will meet the same end. So research carefully how you want the business to take root and grow. Starting a business is one thing; making it work is another. Once you have contacts in the Philippines, confer with them on the steps to take from the start to where you want your business to be. An entrepreneur should know what to do next once he has started.

Get a marketing plan. Marketing does not just mean advertising your products and services. A sound marketing plan also means you have answers for problems that may come up in building an IT business. Anticipate issues that can occur during setup, such as location problems and acquiring technical equipment. Once consumers are using your products and services, you should have a working plan for unexpected technical issues, to avoid unsatisfactory customer experiences.

Once you have these resources, tread slowly but surely. Anticipate changes and be ready to address problems. Establish open communication with your partners, and soon you'll be reaping the benefits of establishing an IT company in the Philippines.

PSA: don't run 'strings' on untrusted files (CVE-2014-8485)

Many shell users, and certainly most of the people working in computer forensics or other fields of information security, have a habit of running /usr/bin/strings on binary files originating from the Internet. Their understanding is that the tool simply scans the file for runs of printable characters and dumps them to stdout - something that is very unlikely to put you at any risk.


It is much less known that the Linux version of strings is an integral part of GNU binutils, a suite of tools that specializes in the manipulation of several dozen executable formats using a bundled library called libbfd. Other well-known utilities in that suite include objdump and readelf.


Perhaps simply by the virtue of being a part of that bundle, the strings utility tries to leverage the common libbfd infrastructure to detect supported executable formats and "optimize" the process by extracting text only from specific sections of the file. Unfortunately, the underlying library can hardly be described as safe: a quick pass with afl (and probably with any other competent fuzzer) quickly reveals a range of troubling and likely exploitable out-of-bounds crashes due to very limited range checking, say:



$ wget http://lcamtuf.coredump.cx/strings-bfd-badptr2
...
$ strings strings-bfd-badptr2
Segmentation fault
...
strings[24479]: segfault at 4141416d ip 0807a4e7 sp bf80ca60 error 4 in strings[8048000+9a000]
...
while (--n_elt != 0)
if ((++idx)->shdr->bfd_section) ← Read from an attacker-controlled pointer
elf_sec_group (idx->shdr->bfd_section) = shdr->bfd_section; ← Write to an attacker-controlled pointer
...
(gdb) p idx->shdr
$1 = (Elf_Internal_Shdr *) 0x41414141


The 0x41414141 pointer being read and written by the code comes directly from that proof-of-concept file and can be freely modified by the attacker to try overwriting program control structures. Many Linux distributions ship strings without ASLR, making potential attacks easier and more reliable - a situation reminiscent of one of the recent bugs in bash.


Interestingly, the problems with the utility aren't exactly new; Tavis spotted the first signs of trouble some nine years ago.


In any case: the bottom line is that if you are used to running strings on random files, or depend on any libbfd-based tools for forensic purposes, you should probably change your habits. For strings specifically, invoking it with the -a parameter seems to inhibit the use of libbfd. Distro vendors may want to consider making the -a mode default, too.




PS. I actually had the libbfd fuzzing job running on this thing!


What's holding back female employment?

Almost four years ago, I asked whether the U.S. was in for a labor market slump similar to the slump experienced in Canada during the 1990s. Evidently, the answer turned out to be yes.

How is the U.S. faring relative to Canada back then? American prime-age males seem to be tracking their Canadian counterparts, both in terms of employment-to-population ratios and in labor force participation rates. American females, on the other hand, appear to be lagging behind their Canadian counterparts. Let me show you some data.

Let's begin by looking at the employment ratio for prime-age males:


As you can see, the sharp drop and subsequent recovery dynamic for prime-age males is remarkably similar across these two countries and time periods. (The initial E-P ratio was about 87% for both countries; see here).

Here is what their labor force participation rates look like:


Again, the recovery dynamic looks almost identical. (The initial participation rate for Canada was 93%, for the U.S. about 91%; see here.)

Alright, now let's take a look at the same statistics for prime-age females. First, the employment ratios:


These dynamics look quite a bit different. The main effect of the recession in Canada was to slow down the growth rate of the employment ratio. In the U.S., the effect has been to reduce the employment ratio, with only a very weak sign of recovery in the past year.

Here is what the labor force participation rate dynamics look like:


Again, two very different recovery dynamics.

A colleague of mine suggested that state-level layoffs in education and government may explain a good part of the lackluster recovery dynamic for U.S. females. This is certainly worth looking into. However, if we take a look at the following diagram, we see that the discrepancy appears to have happened much earlier -- around 1997, in fact.


It seems unlikely to me that the divergence between Canadian and American prime-age females is driven by cyclical considerations (although a small part of the recent gap may be). Work incentives are likely to have changed, although what these changes were, I do not yet know. In any case, I doubt that monetary policy is a tool that can be used to close this gap. I can think of plenty of fiscal interventions that might help, however.

Addendum Oct. 22, 2014

My colleague, Maria Canon, points me to the following paper by Sharon Cohany and Emy Sok, "Trends in labor force participation of married mothers of infants," as well as this interesting set of slides by Jennifer Hunt, "Female labor force participation: slack and reform."

And here's a real doozy: "Universal Child Care, Maternal Labor Supply, and Family Well-Being" by Michael Baker, Jonathan Gruber, and Kevin Milligan (JPE 2008). From the abstract:
We analyze the introduction of highly subsidized, universally accessible child care in Quebec, addressing the impact on child care utilization, maternal labor supply, and family well-being. We find strong evidence of a shift into new child care use, although some crowding out of existing arrangements is evident. Maternal labor supply increases significantly. Finally, the evidence suggests that children are worse off by measures ranging from aggression to motor and social skills to illness. We also uncover evidence that the new child care program led to more hostile, less consistent parenting, worse parental health, and lower-quality parental relationships.

Why Outsource in the Philippines

Looking for a way to decrease your operating expenses while starting your business? Are you considering outsourcing employees in the Philippines? Why not? Ever wonder why foreigners look for Filipinos to work for them? Why is there an increase in the number of BPO companies in the Philippines? The reason is plain and simple – Filipino workers are dedicated and hardworking individuals. Here are a few more reasons why you should outsource in the Philippines:


Filipinos speak English. It is uncommon to find a Filipino who cannot speak English. A foreigner can get lost during his tour of Manila or Makati, but asking for directions in English will bring him to his destination. Some locals may even go out of their way to accompany you.

Filipinos are hardworking. According to the Philippine Statistics Authority, the number of overseas Filipino workers from April to September 2012 alone was estimated at 2.2 million. This figure alone shows how Filipinos love working and earning for their families.

Filipinos will not overcharge you for their quality work. Aside from BPO companies, Filipinos are also found doing freelance work online. Most of the time, you get quality work for a minimal fee.

Now that you have a fair idea of how beneficial it is to outsource in the Philippines, here are the most common industries that outsource work to employees in the Philippines:
Call centers. One of the most lucrative industries in the Philippines is the call center industry, which may be why you can find call centers in almost every city in the country.

IT services. Programming and software development are among the many jobs outsourced to the Philippines by foreign employers and companies. You can find developers either through companies or online job networks. It is a wonder that, despite limited resources, the IT services of the Philippines are so well sought after.

Search engine optimization. SEO work does not require that you see your employees. Most Filipinos can do SEO work with simple instructions and a couple of hours of training. If you are a business looking to start an SEO company, outsourcing employees to do the work will save you a lot of money.

Internet marketing. An online presence is an important aspect of every business, and most marketing is now accomplished online. Why should you be left behind when you can get the right employees by outsourcing?

Virtual staff or employees. Web content, email marketing, meeting scheduling, email management, customer care – you name it. You will surely find the right employees to do the work by outsourcing in the Philippines.

The internet is a great provider not only of information, but also of employees looking for work. Worldwide, Filipinos are working in specific countries or online, providing quality services to foreign employers. Why? Because outsourcing can save a business substantial operational costs. This gives both parties an advantage – Filipinos can work for foreign employers without leaving the comfort of their homes, while companies save on business operating expenses – the perfect combination.

Two more browser memory disclosure bugs (CVE-2014-1580 and #19611cz)

To add several more trophies to afl's pile of image parsing memory disclosure vulnerabilities:





  • MSFA 2014-78 (CVE-2014-1580) fixes another case of uninitialized memory disclosure in Firefox - this time, when rendering truncated GIF images on <canvas>. The bug was reported on September 5 and fixed today. For a convenient test case, check out this page. Rough timeline:




    • September 5: Initial, admittedly brief notification to vendor, including a simple PoC.
    • September 5: Michael Wu confirms the exposure and pinpoints the root cause. Discussion of fixes ensues.
    • September 9: Initial patch created.
    • September 12: Patch approved and landed.
    • October 2: Patch verified by QA.
    • October 13: Fixes ship with Firefox 33.






  • MSRC case #19611cz (MS14-085) is a conceptually similar bug related to JPEG DHT parsing, seemingly leaking bits of stack information in Internet Explorer. This was reported to MSRC on July 2 and hasn't been fixed to date. Test case here. Rough timeline:



    • July 2: Initial, admittedly brief notification to vendor, mentioning the disclosure of uninitialized memory and including a simple PoC.
    • July 3: MSRC request to provide "steps and necessary files to reproduce".
    • July 3: My response, pointing back to the original test case.
    • July 3: MSRC response, stating that they are "unable to determine the nature of what I am reporting".
    • July 3: My response, reiterating the suspected exposure in a more verbose way.
    • July 4: MSRC response from an analyst, confirming that they could reproduce, but also wondering if "his webserver is not loading up a different jpeg just to troll us".
    • July 4: My response stating that I'm not trolling MSRC.
    • July 4: MSRC opens case #19611cz.
    • July 29: MSRC response stating that they are "unable identify a way in which an attacker would be able to propagate the leaked stack data back to themselves".
    • July 29: My response pointing out the existence of the canvas.toDataURL() API in Internet Explorer, and providing a new PoC that demonstrates the ability to read back data.
    • September 24: A notification from MSRC stating that the case has been transferred to a new case manager.
    • October 7: My response noting that we've crossed the 90-day mark with no apparent progress made, and that I plan to disclose the bug within a week.
    • October 9: Acknowledgment from MSRC.




Well, that's it. Enjoy!

Fuzzing random programs without execve()

The most common way to fuzz data parsing libraries is to find a simple binary that exercises the interesting functionality, and then simply keep executing it over and over again - of course, with slightly different, randomly mutated inputs in each run. In such a setup, testing for evident memory corruption bugs in the library can be as simple as doing waitpid() on the child process and checking if it ever dies with SIGSEGV, SIGABRT, or something equivalent.
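In its simplest form - with placeholder names and most error handling elided - that loop can look like this:

#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Run the target binary once on input_file; return 1 if it crashed. */
int run_once(const char* target, const char* input_file) {

  pid_t pid = fork();

  if (!pid) {
    execl(target, target, input_file, (char*)0);
    _exit(127); /* execve() failed */
  }

  int status;
  if (waitpid(pid, &status, 0) < 0) return -1;

  /* Memory corruption usually surfaces as death on SIGSEGV, SIGABRT, etc. */
  if (WIFSIGNALED(status) &&
      (WTERMSIG(status) == SIGSEGV || WTERMSIG(status) == SIGABRT))
    return 1;

  return 0;
}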



This approach is favored by security researchers for two reasons. Firstly, it eliminates the need to dig into the documentation, understand the API offered by the underlying library, and then write custom code to stress-test the parser in a more direct way. Secondly, it makes the fuzzing process repeatable and robust: the program is running in a separate process and is restarted with every input file, so you do not have to worry about a random memory corruption bug in the library clobbering the state of the fuzzer itself, or having weird side effects on subsequent runs of the tested tool.



Unfortunately, there is also a problem: especially for simple libraries, you may end up spending most of the time waiting for execve(), the linker, and all the library initialization routines to do their job. I've been thinking of ways to minimize this overhead in american fuzzy lop, but most of the ideas I had were annoyingly complicated. For example, it is possible to write a custom ELF loader and execute the program in-process while using mprotect() to temporarily lock down the memory used by the fuzzer itself - but things such as signal handling would be a mess. Another option would be to execute in a single child process, make a snapshot of the child's process memory and then "rewind" to that image later on via /proc/pid/mem - but likewise, dealing with signals or file descriptors would require a ton of fragile hacks.



Luckily, Jann Horn figured out a different, much simpler approach, and sent me a patch for afl out of the blue :-) It boils down to injecting a small piece of code into the fuzzed binary - a feat that can be achieved via LD_PRELOAD, via PTRACE_POKETEXT, via compile-time instrumentation, or simply by rewriting the ELF binary ahead of time. The purpose of the injected shim is to let execve() happen, get past the linker (ideally with LD_BIND_NOW=1, so that all the hard work is done beforehand), and then stop early on in the actual program, before it gets to processing any inputs generated by the fuzzer or doing anything else of interest. In fact, in the simplest variant, we can simply stop at main().
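For illustration only - this is not the actual afl patch - a bare-bones LD_PRELOAD injection can be as simple as a shared object with a constructor; the constructor fires after the dynamic linker is done, but before main():

/* shim.c - toy injection point. Build as a shared object and run:
   LD_PRELOAD=./shim.so LD_BIND_NOW=1 ./target
   A real shim would enter its command-wait loop here. */
#include <stdio.h>
#include <unistd.h>

__attribute__((constructor))
static void shim_init(void) {
  fprintf(stderr, "[shim] loaded in pid %d, before main()\n", (int)getpid());
}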



Once the designated point in the program is reached, our shim simply waits for commands from the fuzzer; when it receives a "go" message, it calls fork() to create an identical clone of the already-loaded program; thanks to the powers of copy-on-write, the clone is created very quickly yet enjoys a robust level of isolation from its older twin. Within the child process, the injected code returns control to the original binary, letting it process the fuzzer-supplied input data (and suffer any consequences of doing so). Within the parent, the shim relays the PID of the newly-created process to the fuzzer and goes back to the command-wait loop.
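Glossing over error handling, the protocol just described boils down to the following C sketch; the file descriptor numbers (198 for commands, 199 for replies) match the actual implementation, whose assembly version appears later in this post:

#include <stdint.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static void fork_server(void) {

  uint32_t tmp = 0;
  int status;
  pid_t child;

  write(199, &tmp, 4);                      /* phone home: "we're OK" */

  for (;;) {

    if (read(198, &tmp, 4) != 4) _exit(1);  /* block until "go" arrives */

    child = fork();                         /* cheap copy-on-write clone */
    if (child < 0) _exit(1);

    if (!child) return;                     /* child: resume the program */

    write(199, &child, 4);                  /* relay PID to the fuzzer */
    if (waitpid(child, &status, WUNTRACED) <= 0) _exit(1);
    write(199, &status, 4);                 /* relay exit status, loop */
  }
}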



Of course, when you start dealing with process semantics on Unix, nothing is as easy as it appears at first sight; here are some of the gotchas we had to work around in the code:



  • File descriptor offsets are shared between processes created with fork(). This means that any descriptors that are open at the time that our shim is executed may need to be rewound to their original position; not a significant concern if we are stopping at main() - we can just as well rewind stdin by doing lseek() in the fuzzer itself, since that's where the descriptor originates - but it can become a hurdle if we ever aim at locations further down the line.



  • In the same vein, there are some types of file descriptors we can't fix up. The shim needs to be executed before any access to pipes, character devices, sockets, and similar non-resettable I/O. Again, not a big concern for main().



  • The task of duplicating threads is more complicated and would require the shim to keep track of them all. So, in simple implementations, the shim needs to be injected before any additional threads are spawned in the binary. (Of course, threads are rare in file parser libraries, but may be more common in more heavyweight tools.)



  • The fuzzer is no longer an immediate parent of the fuzzed process, and as a grandparent, it can't directly use waitpid(); there is also no other simple, portable API to get notified about the process' exit status. We fix that simply by having the shim do the waiting, then send the status code to the fuzzer. In theory, we should simply call the clone() syscall with the CLONE_PARENT flag, which would make the new process "inherit" the original PPID. Unfortunately, calling the syscall directly confuses glibc, because the library caches the result of getpid() when initializing - and without a way to make it reconsider, PID-dependent calls such as abort() or raise() will go astray. There is also a library wrapper for the clone() call that does update the cached PID - but the wrapper is unwieldy and insists on messing with the process' stack. (A small demo of this caching quirk appears after this list.)



    (To be fair, PTRACE_ATTACH offers a way to temporarily adopt a process and be notified of its exit status, but it also changes process semantics in a couple of ways that need a fair amount of code to fully undo.)
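To make the getpid() caching quirk concrete, here is a small stand-alone demo of the raw clone() call. Caveat: the outcome depends on your glibc version - newer releases dropped the PID cache entirely, so both values may simply agree:

/* pidcache.c - raw clone() vs. glibc's cached getpid(). */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void) {

  /* Like fork(), but the new process keeps our original parent. */
  long ret = syscall(SYS_clone, CLONE_PARENT | SIGCHLD, 0, 0, 0);

  if (ret == 0) {
    /* In the child: on old glibc, getpid() may still return the value
       cached before the clone, while the raw syscall tells the truth. */
    printf("getpid() = %d, SYS_getpid = %ld\n",
           (int)getpid(), (long)syscall(SYS_getpid));
  }

  return 0;
}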



Even with the gotchas taken into account, the shim isn't complicated and has very few moving parts - a welcome relief compared to the solutions I had in mind earlier on. It reads commands via a pipe at file descriptor 198, uses fd 199 to send messages back to the parent, and does just the bare minimum to get things sorted out. A slightly abridged version of the code is:


__afl_forkserver:

/* Phone home and tell the parent that we're OK. */

pushl $4 /* length */
pushl $__afl_temp /* data */
pushl $199 /* file desc */
call write
addl $12, %esp

__afl_fork_wait_loop:

/* Wait for parent by reading from the pipe. This will block until
the parent sends us something. Abort if read fails. */

pushl $4 /* length */
pushl $__afl_temp /* data */
pushl $198 /* file desc */
call read
addl $12, %esp

cmpl $4, %eax
jne __afl_die

/* Once woken up, create a clone of our process. */

call fork

cmpl $0, %eax
jl __afl_die
je __afl_fork_resume

/* In parent process: write PID to pipe, then wait for child.
Parent will handle timeouts and SIGKILL the child as needed. */

movl %eax, __afl_fork_pid

pushl $4 /* length */
pushl $__afl_fork_pid /* data */
pushl $199 /* file desc */
call write
addl $12, %esp

pushl $2 /* WUNTRACED */
pushl $__afl_temp /* status */
pushl __afl_fork_pid /* PID */
call waitpid
addl $12, %esp

cmpl $0, %eax
jle __afl_die

/* Relay wait status to pipe, then loop back. */

pushl $4 /* length */
pushl $__afl_temp /* data */
pushl $199 /* file desc */
call write
addl $12, %esp

jmp __afl_fork_wait_loop

__afl_fork_resume:

/* In child process: close fds, resume execution. */

pushl $198
call close

pushl $199
call close

addl $8, %esp
ret


But, was it worth it? The answer is a resounding "yes": the stop-at-main() logic, already shipping with afl 0.36b, can speed up the fuzzing of many common image libraries by a factor of two or more. It's actually almost unexpected, given that we still keep doing fork(), a syscall with a lingering reputation for being very slow.



The next challenge is devising a way to move the shim down the stream, so that we can also skip any common program initialization steps, such as reading config files - and stop just a few instructions shy of the point where the application tries to read the mutated data we are messing with. Jann's original patch has a solution that relies on ptrace() to detect file access; but we've been brainstorming several other ways.



PS. On a related note, some readers might enjoy this.

How to call java rest web service in soapUI


The following is an explanation of how you can call a REST web service written in Java from soapUI. Below you can find the actual Java code and the soapUI configuration. We cover three types of methods, namely POST, PUT, and DELETE.


How to call a REST web service using soapUI

public class Person {
private int id;
private String firstName;
private String lastName;
// getters and setters omitted for brevity; most JSON providers
// need them (or public fields) to bind the request body
}

@POST
@Path("/")
public ActionStatus create(Person postData) {
// persist postData here, then return a success/failure status
}

In soapUI
Method=POST
MediaType=application/json
Json String=
{
"firstName" : "edward",
"lastName" : "legaspi"
}

@PUT
@Path("/")
public ActionStatus update(Person postData) {
// look up the person by postData's id, apply the changes, return a status
}

In soapUI
Method=PUT
MediaType=application/json
Json String=
{
"id" : 1,
"firstName" : "edward",
"lastName" : "legaspi"
}

@DELETE
@Path("/{personId}")
public ActionStatus delete(@PathParam("personId") Long personId) {
// delete the person with the given id, then return a status
}

In soapUI
Method=DELETE
Request Parameters=
name=personId
value=1
style=template
level=resource
Your resource should look like: /xxx/{personId}

How to handle CheckBox event in Swift

A very simple checkbox control.

// Toggle handler for a checkbox built from a UIButton. "btn_box" is the
// button's outlet; "box" and "checkBox" are the unchecked/checked images.
@IBAction func btn_box(sender: UIButton) {
    if btn_box.selected {
        // Currently checked: show the empty box and deselect.
        btn_box.setBackgroundImage(UIImage(named: "box"), forState: UIControlState.Normal)
        btn_box.selected = false
    } else {
        // Currently unchecked: show the checked image and select.
        btn_box.setBackgroundImage(UIImage(named: "checkBox"), forState: UIControlState.Normal)
        btn_box.selected = true
    }
}


JavaEE Architect - Edward P. Legaspi

If you're planning to outsource enterprise-level software development, put up an IT company here in the Philippines, or are simply looking for good developers, feel free to contact me.

You can find more information about me (including my LinkedIn account) at the link below:
http://about.me/czetsuya

I'm also embedding my CV here.

Simple To-Do List using core data in Swift

Note: Steps 1-9 are all simply about setting up the controllers. The main "core data" work begins in the step after that. Here's the finished project covering steps 1 to 9.

1.) First, to have a better view, we have to disable size classes. Go to "Main.storyboard" and click the file inspector. Below, you can see the checkbox for "Use Size Classes"; from there, you can disable it.


2.) Next we have to embed the "ViewController" inside a Navigation Controller. Inside the "Storyboard" click the "ViewController" and go to "Editor" -> "Embed In" -> "Navigation Controller". The "ViewController" must now have a "Navigation Bar". 


3.) Next we have to add a "Bar Button Item" inside the navigation bar. From the "Object Library", grab a "Bar Button Item" and drag it to the right corner of the navigation bar. From the "Attributes Inspector", change the "Identifier" to "add".

4.) Next we have to add another "ViewController". From the "Object Library", drag a "ViewController" into the storyboard. Next we have to connect the first view controller to the second view controller. Click the bar button item of the first view controller, hold the control key, and drag it to the second view controller. A pop-up should appear; click "push".



5.) Now with the second view controller, add a bar button item on the left side of the navigation bar and change the identifier to "cancel". Repeat the same process for the "done" button, except that it should be on the right side. Next we have to add a textfield where we can enter our task: drag a textfield into the view controller just below the navigation bar and stretch it from the left margin to the right margin.



6.) The second view controller doesn't have a class yet, so go to "File" -> "New" -> "File" and create a new class named "AddViewController". We have to set it as the class of the second view controller, so inside the storyboard click the second view controller and go to "Identity Inspector" and set the class to "AddViewController".

7.) We can now make an action for the "cancel" button. Inside the second view controller, click the "cancel" button, hold down the control key, and drag it into the "AddViewController" class. Fill in the information as shown below:




Repeat the same process for the textfield; name it "txtField_desc".

8.) To be able to go back to the first view controller from the second view controller, we have to pop the second view controller. Inside the function "btn_cancel" that we've just added, add this line of code:

navigationController?.popViewControllerAnimated(true)

9.) Now for the "done" button: we have to connect it to the first view controller; however, we first need an "anchor point" in the first view controller. So inside the first view controller, add this code:


@IBAction func unwindToFirstViewController(segue: UIStoryboardSegue) {

}

Now we can connect the second view controller to it. Click the "done" button, hold down the control key, drag it to the "Exit" icon, and choose "unwindToFirstViewController".



If you try to run the app from this point, you can now go back-and-forth from the first view controller to the second view controller.

Again, here's the setup from steps 1 to 9.

10.) Adding the data. Inside the second view controller class, import "CoreData" and insert these lines of code:


override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
// Grab the Core Data managed object context from the app delegate.
var appDel : AppDelegate = (UIApplication.sharedApplication().delegate as AppDelegate)
var context : NSManagedObjectContext = appDel.managedObjectContext!

// Insert a new "Task" entity and fill its "desc" attribute from the text field.
var newTask = NSEntityDescription.insertNewObjectForEntityForName("Task", inManagedObjectContext: context) as NSManagedObject

newTask.setValue(txtField_desc.text as String, forKey: "desc")

// Save the context and report any error instead of ignoring it.
var error : NSError?
if !context.save(&error) {
println("Could not save: \(error)")
}

println(newTask)
println("Object saved")

}

11.) Showing the contents of the data. We need a table view in which to put a cell. Inside the storyboard, drag a table view into the first view controller.

Next we have to add a cell where we can display the task description. Drag a single table view cell into the table view and change its identifier to "cell".

Next we have to connect the table view's data source and delegate to the first view controller (the one that contains it). Right-click the table view and, inside the circle next to the data source, hold the control key and drag it to the view controller. Repeat the same process for the delegate.
(See image below)

Next, make an outlet for the table view. As with the textfield, control-drag the table view into its class and name it "tableView".



12.) Inside the first view controller class, import "CoreData". Then we need an array where we can temporarily store our data.

var taskList : [String] = []

Next, we need the table view to display our data, so insert these lines of code:

func loadData(){
println("loading data... please wait...")
var appDel = UIApplication.sharedApplication().delegate as AppDelegate
var context = appDel.managedObjectContext
var error : NSError?

// Fetch every "Task" object; returnsObjectsAsFaults = false tells Core Data
// to load the real attribute values instead of placeholder faults.
var request = NSFetchRequest(entityName: "Task")
request.returnsObjectsAsFaults = false

var results : NSArray = context!.executeFetchRequest(request, error: &error)! as NSArray
if results.count > 0 {
for res in results{
taskList += [res.valueForKey("desc") as String]
}
}else{
println("no data loaded")
}
}

func numberOfSectionsInTableView(tableView:UITableView!)->Int
{
return 1
}

func tableView(tableView: UITableView!, numberOfRowsInSection section: Int) -> Int
{
return taskList.count;
}

func tableView(tableView: UITableView!, cellForRowAtIndexPath indexPath: NSIndexPath!) -> UITableViewCell!
{
// A production app would dequeue a reusable cell here instead of
// creating a fresh one for every row.
let cell:UITableViewCell = UITableViewCell(style:UITableViewCellStyle.Default, reuseIdentifier:"cell")
cell.textLabel?.text = taskList[indexPath.row]

return cell
}

func tableView(tableView: UITableView!, didSelectRowAtIndexPath indexPath: NSIndexPath!)
{
println("You selected cell #\(indexPath.row)!")
}

Next, for our data to load, we have to invoke loadData() inside the "viewDidLoad" function. Inside "viewDidLoad", insert this line of code:

 loadData()

Finally, when the user enters a new task, we have to reload the data. So inside the "unwindToFirstViewController" function, insert these lines of code:


taskList.removeAll(keepCapacity: false)
loadData()
tableView.reloadData()

Try to run the app and you should now have a working simple to-do app :)
Here's the final code.

Qualities of the IT Workforce in the Philippines

A small business enterprise naturally needs manpower to get rolling. However, the cost of efficient manpower may be more than the budget can bear and still keep the business afloat. As such, it is great news that a small business can start big with outsourcing.


In human resources terms, outsourcing means hiring employees outside your locality. Most of the time, these employees telecommute, meaning they deliver their services or work online. As such, the basic tools an outsourced employee needs are a desktop computer or laptop and a stable internet connection.

Both small and large enterprises can benefit from outsourcing. One of the most outsourced skill sets in the Philippines is the IT workforce, as you can see from the numerous BPO hubs catering to different businesses overseas. Putting up a business process outsourcing hub entails offering everything the clientele needs. Regardless of the packages these BPO or IT workforces offer, they should possess the following characteristics:

Clearly defined purpose. Different clients require different skill sets. As such, each package of services you offer should state the purpose of that group. For instance, if you are offering an IT workforce that deals with web development, the skill set of these people should be clearly defined. This will help the clientele choose the appropriate manpower for their business.

Foresight. Outsourcing has its ups and downs, and software and programs do encounter bugs from time to time. As such, bug handling should also be included in the set of skills you offer; it shows you have considered what clients may experience while using the software or programs. A well-rounded IT employee can show you samples of bugs and what they did to address and correct the issues.

Strategy. The workforce should also include a strategy for your business. This may include proposals on the required equipment, along with programs and software. After the physical setup is implemented, there should also be a clear-cut plan for how the operation of the business should run, along with monitoring and projection of outputs. This will give the clientele a clear idea of the capital they need and when they should expect their returns on investment.

Dynamic. The IT workforce should be dynamic. They should be able to modify the set of services the clientele requires, with appropriate adjustments of strategy to get the required results. This may mean changes in communication, operations, or business processes. Regardless of the changes, the workforce should be able to cope, perform efficiently, and generate the desired results.


Outsourcing is one of the best ways for the clientele to cut operational expenses and improve their business. On the other hand, outsourcing also provides additional economic stability for the workforce. It is a win-win solution, provided the needs of the clientele are matched with the services offered by the IT workforce.

Cost of Outsourcing IT

Wondering if you should outsource to save on start-up finances? An entrepreneur wanting to start a business should consider all the costs and compare them against his return on investment in order to determine whether the business is feasible. So does this mean that one has to do a feasibility study before venturing into a business? The answer is both yes and no.


For instance, if you will be venturing into a business that has no available information as to its feasibility in a given location, and there are no business models you can use to start your business, then a feasibility study is called for. However, if you are planning to outsource, you can find helpful information on the internet. Some of it is even based on sound research data and experience.

Business entrepreneurs have different reasons for outsourcing. Gartner, Inc., one of the world's leading information technology research and advisory companies, conducted a survey of 945 IT professionals in 2006 regarding outsourcing. The results revealed two outcomes that business entrepreneurs look for in outsourcing:

  • Significant cost savings
  • A more competitive company in the market

Looking at it closely, a business will basically become more competitive when it achieves significant cost savings through outsourcing. The two reasons for outsourcing therefore go hand in hand.
There are three issues that may pose a challenge to anyone looking to outsource:

Responsibility issue. When you outsource IT professionals, you will be looking for individuals who can provide information to you; it is up to the owner to implement the solution. This, however, may vary. For instance, if you are selling software products, troubleshooting will need to be addressed by a couple of people. They may not need training on providing solutions, but they will definitely need training in communicating their solutions to clients through phone or chat. On the other hand, responsibility for website coding or for troubleshooting an online system or set of programs may be easier to hand to outsourced vendors, since they can readily access the software, program, or website.

Cost issue. A business entrepreneur will want to save on cost; this is basically the reason he wants to outsource IT vendors or professionals for his business. As a businessman, you may find outsourcing costly if you are left on your own to deal with what the IT vendors cannot do. This should be addressed early on: the obligations between you and the IT vendor should be clearly enumerated in a contract. It can also be addressed by proper and sufficient testing and filtering of IT vendor applicants or professionals.

Transition challenges. If you are used to running your business physically, where you go to an office and instruct IT professionals on what to do or what you expect, you may find telecommuting challenging. If you're used to showing what you want by being physically present, you should learn to convey it by explaining it or by using online resources to get your idea across.

Outsourcing needs will depend on the type of business that you have. It is recommended that you tread lightly and carefully, consider all possible outcomes, and be ready with alternative plans should the first one not work.

Bash bug: the other two RCEs, or how we chipped away at the original fix (CVE-2014-6277 and '78)

The patch that implements a prefix-based way to mitigate vulnerabilities in bash function exports has been out since last week and has already been picked up by most Linux vendors (plus by Apple). So, here's a quick overview of the key developments along the way, including two really interesting things: proof-of-concept test cases for two serious, previously non-public RCE bugs tracked as CVE-2014-6277 and CVE-2014-6278.



NOTE: If you or your distro maintainers have already deployed Florian's patch, there is no reason for alarm - you are almost certainly not vulnerable to attacks. If you do not have this patch, and instead relied only on the original CVE-2014-6271 fix, you probably need to act now. See this entry for a convenient test case and other tips.



Still here? Good. If you need a refresher, the basic principles of the underlying function export functionality, and the impact of the original bash bug (CVE-2014-6271), are discussed in this blog post. If you have read the earlier post, the original attack disclosed by Stephane Chazelas should be very easy to understand:


HTTP_COOKIE='() { 0; }; echo hi mom;' bash -c :


In essence, the internal parser invoked by bash to process the specially encoded function definitions passed around in environmental variables had a small problem: it continued parsing the code past the end of the function definition itself - and at that point, flat out executed whatever instructions it came across, just as it would do in a normal bash script. Given that the value of certain environmental variables can be controlled by remote attackers in quite a few common settings, this opened up a good chunk of the Internet to attacks.



The original vulnerability was reported privately and kept under embargo for roughly two weeks to develop a fairly conservative fix that modified the parser to bail out in a timely manner and not parse any trailing commands. As soon as the embargo was lifted, we all found out about the bug and scrambled to deploy fixes. At the same time, a good chunk of the security community reacted with surprise and disbelief that bash is keen to dispatch the contents of environmental variables to a fairly complex syntax parser - so we started poking around.



Tavis was the quickest: he found that you can convince the parser to keep looking for a file name for output redirection past the boundary between the untrusted string accepted from the environment and the actual body of the program that bash is being asked to execute (CVE-2014-7169). His original test case can be simplified to:


HTTP_COOKIE='() { function a a>\' bash -c echo



This example would create an empty file named "echo", instead of executing the requested command. Tavis' finding meant that you would be at risk of remote code execution in situations where attacker-controlled environmental variables are mixed with sanitized, attacker-controlled command-line parameters passed to calls such as system() or popen(). For example, you'd be in trouble if you were doing this in a web app:


system("echo '"+ sanitized_string_without_quotes + "' | /some/trusted/program");



...because the attacker could convince bash to skip over the "echo" command and execute the command given in the second parameter, which happens to be a sanitized string (albeit probably with no ability to specify parameters). On the flip side, this is a fairly specific if not entirely exotic coding pattern - and contrary to some of the initial reports, the bug probably wasn't exploitable in a much more general way.



Chet, the maintainer of bash, started working on a fix to close this specific parsing issue, and released it soon thereafter.



On the same day, Todd Sabin and Florian Weimer independently bumped into a static array overflow in the parser (CVE-2014-7186). The bug manifested in what seemed to be a non-exploitable crash, an attempt to dereference a non-attacker-controlled pointer at an address that "by design" should fall well above the end of the heap - but it was enough to cast even more doubt on the robustness of the underlying code. The test for this problem was pretty simple - you just needed a sequence of here-documents that overflowed a static array, say:


HTTP_COOKIE='() { 0 <<a <<b <<c <<d <<e <<f <<g <<h <<i <<j <<k <<l <<m; }' bash -c :



Florian also bumped into an off-by-one issue with loop parsing (CVE-2014-7187); the proof-of-concept function definition for this is a trivial for loop nested 129 levels deep, but the effect can be only observed under memory access diagnostics tools, and its practical significance is probably low. Nevertheless, all these revelations prompted him to start working on an unofficial but far more comprehensive patch that would largely shield the parser from untrusted strings in normally encountered variables present in the environment.



In parallel to Tavis' and Florian's work, I set up a very straightforward fuzzing job with american fuzzy lop. I seeded it with a rudimentary function definition:

() { foo() { foo; }; >bar; }

...and simply let it run with a minimalistic wrapper that took the test case generated by the fuzzer, put it in a variable, and then called execve() to invoke bash.



Although the fuzzer had no clue about the syntax of shell programs, it had the benefit of being able to identify and isolate interesting syntax based on coverage signals, deriving around 1,000 other distinctive test cases from the starting one while "instinctively" knowing not to mess with the essential "() {" prefix. For the first few hours, it kept hitting only the redirect issue originally reported by Todd and the file-creation issue discovered by Tavis - but soon thereafter, it spewed out a new crash illustrated by this snippet of code (CVE-2014-6277):


HTTP_COOKIE='() { x() { _; }; x() { _; } <<a; }' bash -c :



This proved to be a very straightforward use of uninitialized memory: it hit a code path in make_redirect() where one field in a newly-allocated REDIR struct - here_doc_eof - would not be set to any specific value, yet would be treated as a valid pointer later on (somewhere in copy_redirect()).
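The bug class itself is easy to picture in isolation. As a contrived stand-in - this is not bash's actual code - consider:

#include <stdlib.h>
#include <string.h>

struct redir {
  char* here_doc_eof;   /* never written on some allocation paths */
};

int main(void) {

  /* The freshly-allocated struct holds whatever was left on the heap... */
  struct redir* r = malloc(sizeof *r);

  /* ...yet the stale field is later trusted as a valid C string, so both
     the strlen() and the copy read through a garbage pointer. */
  char* copy = strcpy(malloc(1 + strlen(r->here_doc_eof)), r->here_doc_eof);
  (void)copy;

  return 0;
}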



Now, if bash is compiled with both --enable-bash-malloc and --enable-mem-scramble, the memory returned to make_redirect() by xmalloc() will be set to 0xdf, making the pointer always resolve to 0xdfdfdfdf, and thus rendering the prospect of exploitation far more speculative (essentially depending on whether the stack or any other memory region can be grown by the attacker to overlap with this address). That said, on a good majority of Linux distros, these flags are disabled, and you can trivially get bash to dereference a pointer that is entirely within attacker's control:


HTTP_COOKIE="() { x() { _; }; x() { _; } <<`perl -e '{print "A"x1000}'`; }" bash -c :

bash[25662]: segfault at 41414141 ip 00190d96 sp bfbe6354 error 4 in libc-2.12.so[110000+191000]



The actual fault happens because of an attempt to copy here_doc_eof to a newly-allocated buffer using a C macro that expands to the following code:


strcpy(xmalloc(1 + strlen(redirect->here_doc_eof)), (redirect->here_doc_eof))



This appears to be exploitable in at least one way: if here_doc_eof is chosen by the attacker to point in the vicinity of the current stack pointer, the apparent contents of the string - and therefore its length - may change between stack-based calls to xmalloc() and strcpy() as a natural consequence of an attempt to pass parameters and create local variables. Such a mid-macro switch will result in an out-of-bounds write to the newly-allocated memory.



A simple conceptual illustration of this attack vector would be:

char* result;
int len_alloced;

main(int argc, char** argv) {

  /* The offset will be system- and compiler-specific */;
  char* ptr = &ptr - 9;

  result = strcpy (malloc(100 + (len_alloced = strlen(ptr))), ptr);

  printf("requested memory = %d\n"
         "copied text = %d\n", len_alloced + 1, strlen(result) + 1);

}

When compiled with the -O2 flag used for bash, on one test system, this produces:


requested memory = 2

copied text = 28



Of course, the result will vary from system to system, but the general consequences of this should be fairly evident. The issue is also made worse by the fact that only relatively few distributions were building bash as a position-independent executable that could be fully protected by ASLR.



(In addition to this vector, there is also a location in dispose_cmd.c that calls free() on the pointer under some circumstances, but I haven't really spent a lot of time trying to develop a functioning exploit for the '77 bug for reasons that should be evident in the text that follows... well, just about now.)



It has to be said that there is a bit less glamour to such a low-level issue that still requires you to go through some mental gymnastics to be exploited in a portable way. Luckily, the fuzzer kept going, and a few hours later, isolated a test case that, after minimization, yielded this gem (CVE-2014-6278):


HTTP_COOKIE='() { _; } >_[$($())] { echo hi mom; id; }' bash -c :



I am... actually not entirely sure what happens here. A sequence of nested $... statements within a redirect appears to cause the parser to bail out without properly resetting its state, and puts it in the mood for executing whatever comes next. The test case works as-is with bash 4.2 and 4.3, but not with more ancient releases; this is probably related to changes introduced a few years ago in bash 4.2 patch level 12 (xparse_dolparen()), but I have not investigated whether earlier versions are patently not vulnerable or simply require different syntax.



The CVE-2014-6278 payload allows straightforward "put-your-commands-here" remote code execution on systems that are protected only with the original patch - something that we were worried about for a while, and what prompted us to ask people to update again over the past few days.



Well, that's it. I kept the technical details of the last two findings embargoed for a while to give people some time to incorporate Florian's patch and avoid the panic associated with the original bug - but at this point, given the scrutiny that the code is under, the ease of discovering the problems with off-the-shelf open-source tools, and the availability of adequate mitigations, the secrecy seems to have outlived its purpose.



Any closing thoughts? Well, I'm not sure there's a particular lesson to be learnt from the entire story. There's perhaps one thing - it would probably have been helpful if the questionable nature of the original patch was spotted by any of the notified vendors during the two-week embargo period. That said, I wasn't privy to these conversations - and hindsight is always 20/20.