Solar VPS Blog

The School of Virtualization: The Benefits of Cloud Computing for K-12 Classrooms

As cloud computing grows, much of the focus has centered on how it allows more sophisticated interaction between businesses and clients, along with more efficient internal productivity. The IT approach goes well beyond business productivity and application complexity, though. The system could potentially save lives: it allows medical researchers to process huge amounts of data affordably and quickly, as we covered in a previous piece. It is also changing the way in which we teach and learn.

Savings & broad benefits for education and other sectors

Last year, education technology site THE Journal conducted surveys of schools, as well as various other types of organizations, to gauge the growth of cloud computing. Its findings suggest that the strategy doesn't just play a strong role in academia: it is actually more impactful in some ways than it is for the average economic sector. The publication reported that the technology would reach 25% of IT budgets for K-12 schools by this year and would make up more than a third of budgets by 2017.

A CDW Government study revealed that the top use for the solution in K-12 environments is rather simple: storage, which represented 40% of use. Two other types of applications were close behind storage, though: collaborative and conferencing tools, at 36%, and office workflow applications, at 33%. The breakdown of the model's adoption in childhood education is similar to that seen elsewhere in the for-profit, government, and nonprofit spheres: storage and conference/collaboration are generally the most widely used applications, as a 2013 report from CDN clarifies. Additional popular reasons it is used across all types of organizations include its raw processing power, along with operational and messaging tools.

Organizations that have deployed distributed virtualization solutions – among all organizations, not just education – reduced their costs on applicable services by 13% in 2013, with that number expected to reach 17% this year and 25% in 2017. Education savings are actually better than the average: 20% projected for 2014 and 27% for 2017.

Benefits of the cloud for all fields can be understood through the study's survey responses, as indicated below. Note that the percentages exceed 100% because respondents were asked to check "all that apply":
  1. Efficiency/productivity – 55%
  2. Mobile access for staff – 49%
  3. Creativity/innovation – 32%
  4. Reduced strain on tech department – 31%
  5. Research/development/deployment of new offerings – 24%
Additional benefits of the cloud for education

Pearson School Systems notes a number of ways that the cloud is especially helpful to the field of education. Pearson's thoughts are applicable in part to general academic IT but also specifically to the classroom:
  • Real-time backup – The approach allows for real-time saving of materials so nothing is lost if a tool on an individual teacher’s device fails. Regardless of any problems that arise on one computer, documents are still intact and can be accessed from another device.
  • Storage – Teachers and students are able to store any types of files within the cloud. Due to the strategy’s elasticity and affordability, large files do not pose problems.
  • Access – Teachers and students can access files from anywhere – in the classroom, at home, or through mobile devices.
  • Collaboration – The model makes it easy for teachers to work together on projects and for students to work on group assignments. Anyone with access can revise documents, with the new version reflected simultaneously. This aspect allows for ideas to be enhanced by teamwork and for lesson plans to be built synergistically.
  • Paper and time reduction – As the IT system makes it easier to conduct tasks through the Internet, teachers no longer need to expend time and budget on printing and copying. Students can view and sometimes complete homework online, and they can access reading materials and other educational resources as well.
  • Homework – With regard to homework specifically, teachers can detail projects and assignments using virtualized tools. Students can access task pages from anywhere and post their work in response. The work is easily saved and can be graded by the teacher thereafter. Assignments don't need to be collected in class, saving time and making flu bugs less likely to spread.
Dr. Matthew Lynch also addresses the benefits of the cloud for schools in Education Week, covering somewhat different terrain. He states the advantages of the cloud as follows:
  • Improved communication – Lynch notes that a portal such as Edline allows everyone involved in an academic setting access to classroom materials. Parents can check their children’s assignments and grades from any location, at any time. Teachers can make announcements to everyone involved in the class. It also allows all parties easy, organized, and reliable access to past and future assignments. If desired, forums can be created to enable direct interaction between parents, teachers, and/or students.
  • Disaster preparedness – One crucial component of any IT infrastructure is disaster planning. Schools amass data about students that helps determine paths forward for all involved. If the records of the school are compromised by any type of disaster, whether they are stored in file cabinets or on hard drives, all that student data could be lost. By virtualizing that information, the school knows it will always be available within a system structured to allow many redundancies.
  • Centralization & ease – As indicated briefly above, one of the strongest attributes of this strategy is its ability to integrate various programs and sets of data. The approach doesn’t require an investment in hardware, and billing is based on use, making it a cost-effective and simple solution to deploy.
  • Recovery – With this model, crashes and loss of data are quickly becoming a thing of the past. The system operates smoothly despite any failures of specific hardware or software components. If data is lost at one location, getting a backup copy is fast and simple.
Cloud computing is gradually taking hold throughout the field of education, as evidenced by its projected 25% share of K-12 IT budgets this year. The benefits of this model are manifold, with efficiency seen as especially critical by IT executives. The model makes homework assignments more accessible; enhances disaster preparedness; and fosters communication between parents, teachers, and students.

What is Platform-as-a-Service (PaaS)? General Idea & Debate on Specifics

Overview of types of distributed virtualization

The cloud is essentially the name given to various services that are provided through the Internet. It allows for incredibly fast processing at low cost, utilizing available resources on numerous servers, through administrative software that sends traffic and data via the most efficient channels. Mobile applications have proliferated due to this technology; scientific research has experienced an enormous boost; and the Industrial Internet has been granted center stage via virtualization of production monitoring.

Cloud systems, though, are used for many different purposes beyond the above. There are a couple of different ways to categorize instances of this form of computing – which is not really one entity but a variety of situations in which the same basic IT strategy is used. One is by its basic technological construction, essentially a matter of the machine architecture combined with the users accessing it. Examples within this category include the below (keeping in mind that these are general definitions):
  • public – vast numbers of users accessing the same, massive pool of servers, often located in various worldwide locations;
  • private – virtualization across a number of different machines within a small pool of servers controlled by one company (essentially a VPS – virtual private server – distributed for optimal performance and redundancy);
  • hybrid – use of traditional IT in conjunction with cloud computing, which often involves an in-house legacy system combined with a public system;
  • community – similar to the way a hybrid structure combines old and new technology, this model is used by a number of organizations that have the same IT concerns, such as compliance and/or heightened security.
The other basic way to divide distributed virtualization technology is along lines of function. Examples within this category include the following:
  • Infrastructure-as-a-Service (IaaS) – a model that utilizes data storage and virtual servers, so you have an entire hosted environment for your business;
  • Platform-as-a-Service (PaaS) – operating system, database, Web server, and an interface for programming operations;
  • Software-as-a-Service (SaaS) – the simplest and most widely used type of the computing strategy, SaaS simply allows users access to applications through the Internet (such as an email service, graphics program, etc.).
Additional, less common types of distributed virtualization include API-as-a-Service (APIaaS), Data-as-a-Service (DaaS), Security-as-a-Service (SECaaS), and others.

Platform-as-a-Service (PaaS) – basic definition, advantages & challenges

As noted above, this type of service allows users access to servers, an operating system, storage space, and bandwidth via the Web. Through the systems available, a business can use the resources of a virtual machine (VM) as a basis to run and/or develop applications. (A minimal example of the kind of application code a developer hands to such a platform appears after the list below.) PaaS benefits developers by allowing upgrades to the operating system and applications without the need for downtime. Developers can collaborate from anywhere across the globe, accessing the system through any Internet connection: it provides a setting for the congregation of any tools and users desired. It's less expensive to create and maintain, because the entire system is provided by one source rather than through numerous data centers serving separate purposes. Everything is automatically integrated. Development is simplified, with one hub serving the needs of all involved.

One potential negative of this type of service is getting stuck in a proprietary model. That issue can be avoided by clarifying your freedom to leave and discussing the possibility of future migration upfront with the vendor. A second challenge that can arise with this computing model is customization: when any service is provided, parameters are not always flexible enough to provide an adequate solution for all customers over time.

Categories of Platform-as-a-Service

This type of computing model is available in a variety of different forms, with five standard variations on the service:
  1. Add-on development – These environments utilize specific software to underpin their platforms. In other words, subscribers to this model are using Software-as-a-Service (SaaS) within PaaS. Businesses must typically pay to use the software alongside the platform expenses.
  2. Standalone development – Services offering this model are not reliant on specific software. Businesses are free to develop using whatever software they choose, allowing more freedom but also requiring users to understand what options are best for their projects.
  3. Application delivery-only – As its name suggests, companies providing this service do not make sure that applications are working correctly with any forms of testing or debugging. Instead, users are merely getting access to the application.
  4. Open platform – This model does not include servers or any kinds of resources. Instead, it gives access to open source programs – which are typically free, fully customizable, and use a broad license. Certain open systems give businesses full compatibility so that they can use whatever machine, OS, database, and language they want.
  5. Mobile platform (mPaaS) – This type of model is designed specifically for mobile devices, making it possible for businesses to quickly and effectively organize and implement bring your own device (BYOD) programs, integrating employees' personal and business computing.
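To make the division of labor concrete (the platform supplies the operating system, web server, and database, while the subscriber supplies only application code), here is a minimal sketch of the kind of app a developer might hand to a PaaS. Flask and the route shown are purely illustrative assumptions; the article does not prescribe any particular framework.

# A minimal web app of the sort a developer deploys to a PaaS: the platform
# provides the OS, web server, and scaling, so only this code ships.
# Flask is an illustrative choice, not one the article prescribes.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from a PaaS-hosted application."

if __name__ == "__main__":
    # Run locally for development; on a PaaS, the platform's own process
    # manager and web server would typically launch the app instead.
    app.run(port=8000)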
Disagreement re: Exactly What PaaS Is

The above descriptions may make it sound as if the parameters of this type of technology have been irrefutably determined. However, as with any developing field, terminology and understandings of the concepts are still ambiguous and in flux. Executives at several major IT organizations spoke on a panel at the 2014 Cloud Connect Summit in Las Vegas. All of those present had a somewhat different way of defining the distributed virtualization approach. Here were the basic ways four of the experts described Platform-as-a-Service:
  • Mark Russinovich, Microsoft: the integration of software with its environment, rather than the placement of application code within a disparate server;
  • Margaret Dawson, HP: a complete and cohesive virtualized environment for the development of applications;
  • Jesse Proudman, Blue Box Group: a virtualized hosting infrastructure that facilitates a broad array of application and container services;
  • Krishnan Subramanian, Red Hat: a platform built to scale with its applications.
Cloud computing is a diverse set of technological practices that allow companies to take advantage of server virtualization and resource distribution. PaaS is one of the three most common forms of this IT strategy. It is fully integrated with its infrastructure and applications, enhances collaborative potential among users, and makes upgrading seamless. Various systems of delivery exist, and the field is diverse enough that not everyone agrees on how to define it. Most can agree, though, that this type of system benefits companies through its inherent, integrated scalability.

What is Cloud Integration? 6 Tips to Accomplish it Successfully

According to InfoWorld, one of the major trends throughout 2014 will be cloud integration. As its name suggests, this process seeks to combine all elements of an organization that are stored in distributed virtual systems into one central database. This technique is growing in popularity because many environments, especially software-as-a-service (SaaS) applications, tend to have their own storage components. Businesses can end up with data isolated in various architectures rather than all in one large pool.

Many companies have run into this issue in the past with dedicated solutions and don't want to experience the same problem again. The traditional issue arose due to information silos, systems that were incapable of operating effectively with other applications within the same infrastructure. Distributed virtualized models can turn into silos, in effect, when various data (related to accounts, inventory, and other elements) exist in a number of different scenarios. Various cloud integration tools are now available on the market, such as MuleSoft, SnapLogic, and Cordys. Exploring the nature of this practice and tips for deployment can help you pull off the transition without difficulty.

What is cloud integration?

Essentially, this strategy allows you to configure a number of different programs to feed, and have full access to, the same data store. The applications can interact with each other or via an independent tool. The benefits of this type of system, when compared to previous data models, include these capabilities:
  • all users have access to completely up-to-date (real-time) data from any PC, cell phone, or tablet;
  • all data (that users have permission to see) is accessible through any Web connection;
  • numerous applications combine their data, as seen with integration of contact details and calendar appointments within the Google environment;
  • the same login credentials are used to access the various software components that are being combined;
  • administrative information is transmitted seamlessly between your diverse set of applications;
  • because the data is not siloed, you can enhance its integrity and avoid the issue of conflicting data redundancies (repetition of accounts or other variables that are incongruent); and
  • scalability is built into the approach, so you don’t need to be concerned with limitations – regarding how many applications or users are involved – that could otherwise inhibit growth in the future.
Notably, this computing tactic is not just a solution for companies that have run into problems related to data in insular pockets. Instead, it is a solution that anyone considering software-as-a-service (software provided by a third party and used through the Web) can consider so that their adoption of SaaS solutions is smooth and allows full accessibility as the business grows.

Cloud integration tips

Without using this tactic, your IT team may code each application as a one-off solution. This makes management difficult. Since distributed virtualization holds the promise of efficiency, elasticity, and cost-effectiveness, you don't want to lose any of those advantages by creating or maintaining disparate environments. Here are a few tips from Jitterbit and IBM:
  1. Make a plan. Before you consider cloud integration itself, create an overall plan for implementing distributed virtualization. You want this plan to include not just the tasks that must be completed but the value you hope to attain, providing a sense of expectations.
  2. Figure out what is being pulled together. Before you think about the data itself, start with a comprehensive consideration of which users are accessing your current applications or ones you plan to adopt. Consider the various kinds of data being used – such as billing information or products. Think about the places where you want each type of data to be made available and the application that would best serve as your central hub. Determine whether it is necessary to have that data updated moment-by-moment (real-time), daily, or weekly. You also want to pay attention to the advantages for operations, along with the specific individuals and populations that will be helped by the process. Bear in mind that this project does not need to be completed in one sweep: you can test it in one department or segment of your company and then apply the solution throughout the company once it has proven successful.
  3. Model effectively. Make sure the IT tasks are appropriately suited to the models you are assigning to complete them, so that you don't run into technical snags during or following deployment.
  4. Take everything into account. Make sure that you know the extent to which traditional systems and distributed virtual systems can be combined, so that you are fully aware of the capability of the solution.
  5. Consider the technical aspect. Different cloud integration solutions, needless to say, use different approaches to bring together the various component parts. Understand how the process is being accomplished. For example, IBM centers its approach on delivery, which the company claims is effective at streamlining your efforts. When considering the technical side, pay specific attention to the API. Generally, distributed virtual services offer full-featured APIs, often built on SOAP or REST technology. However, the constraints on specific services can range tremendously – the way they are organized, any security parameters, and ceilings to data flow. A general sense of the APIs involved with each system makes it simpler for you to combine the data into one central location (for access, although the system itself is widely distributed), with expansive resources for processing. Like any software, a high-quality API experiences regular updates (a minimal hand-scripted example follows this list).
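To make tip 5 concrete, here is a minimal sketch of what a hand-scripted, point-to-point sync between two REST APIs might look like; the endpoints, field names, and tokens are invented for illustration and are not drawn from the article.

# Hand-scripted sync: pull customers from one hypothetical SaaS API and push
# them to another. Endpoints and fields are invented for illustration.
import requests

SOURCE = "https://billing.example.com/api/v1/customers"
TARGET = "https://crm.example.com/api/v1/contacts"

def sync_customers(source_token: str, target_token: str) -> None:
    resp = requests.get(SOURCE, headers={"Authorization": f"Bearer {source_token}"},
                        timeout=10)
    resp.raise_for_status()
    for customer in resp.json():
        # Translate the billing system's record into the CRM's expected shape.
        contact = {"name": customer["name"], "email": customer["email"]}
        requests.post(TARGET, json=contact,
                      headers={"Authorization": f"Bearer {target_token}"},
                      timeout=10).raise_for_status()

if __name__ == "__main__":
    sync_customers("source-api-key", "target-api-key")

Scripts like this work, but every field mapping lives in your own code, which is exactly the maintenance burden discussed next.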
If you individually script the systems in this way, to avoid the cost of an integration service, you can get everything immediately into place and functional; but it's difficult to keep up with the changing code of the APIs. Jitterbit compares conducting this project through line-by-line coding to wiring your entertainment system carelessly: when you get a new piece of equipment (like a new piece of code), "Don't get back [behind your TV] and start wondering which wire goes to where." It's neither impossible nor necessarily unwise to script the integration yourself, but know that the coding will require regular maintenance.
  6. Use the full strength of what you have. Moving your data into one pool is not supposed to be excessively complicated: you aren't reinventing your systems. Whatever you already have in place within your environment can be utilized to expedite the transition.

Analysts of the distributed virtualization industry view cloud integration as a top trend for businesses in 2014. It enhances organization and accessibility, resolves conflicts, and bolsters the integrity of data. By using a plan that considers the types of users and types of data that will be affected by the process, along with a consideration of modeling and technical aspects, you can quickly and effectively combine your various data stores into a unified whole.

Trusted by the US Department of Defense, 5 Reasons Cloud Security is Stronger Than You Might Think

Cloud computing is considered a highly affordable solution that processes content through an incredibly reliable, redundant structure. Beyond those characteristics, the technology operates at a speed that "often enables it to process … data faster than a supercomputer," says Indiana University (IU) computer scientist Geoffrey C. Fox. That comment is especially compelling since IU unveiled Big Red II, the most powerful dedicated university supercomputer worldwide, in 2013.

Despite the strengths of distributed virtualization, security is not considered its strong point. However, Forbes published a Microsoft report fully three years ago (March 2011) noting that the Pentagon was using cloud security for its systems. Furthermore, specific bullet-point explanations of the strength of user protections and data safeguards within these models of information storage and processing – as presented by ScanSource and Mashable – bolster the legitimacy of the perspective that security is a nonissue or potentially even an advantage.

US Military's Use of the Cloud

Keith Alexander was in a powerful position in the IT security field in 2011. He was the Director of the National Security Agency, Commander of US Cyber Command, and an active four-star general of the US Army. As covered by Army Times, Alexander spoke at a House Armed Services subcommittee hearing regarding a 2012 budgetary request of $159 million for the Cyber Command to conduct operations. The Microsoft-Forbes piece quotes Alexander stating that cloud security was the best way to protect the Department of Defense's infrastructure.

This acceptance by the world's largest military – with more than 8 times the annual budget of the 2nd strongest military, Russia – is notable: the typical "pro vs. con" for distributed virtualization is cost-effectiveness vs. poor security. Alexander acknowledged that insider threats might seem to be a particular point of weakness for this IT structure, and that no computing framework can be made completely safe from criminal interference, but he argued that the cloud has specific strengths: "The controls and tools that will be built into the cloud," Alexander said, "will ensure that people cannot see any data beyond what they need for their jobs and will be swiftly identified if they [attempt] unauthorized [use]."

What makes these architectures particularly strong? What's the argument that cloud security should be taken seriously? It's multi-faceted, containing the following core elements:
  1. Divide and conquer. Everyone knows to diversify their financial portfolios, to divide their eggs into separate baskets, to broaden their skill-sets: all these are tactics to mitigate risk. Hence the concept of distribution. In fact, it’s best to push the information away: you don’t want your data near you but at a distance. IT professionals are in consensus, reports ScanSource, that the most likely cyber-security danger you will face is your own workforce. It’s simply much easier for insiders than outsiders to damage your business. The wrong server might mistakenly get taken down to be repurposed, resulting in major data loss. Employees, former employees, and affiliates represent major threats to your business: in 2006, a study by the Department of Trade and Industry and PricewaterhouseCoopers found that 6 out of every 10 security breaches came from one of those three sources.
  2. Fight denial-of-service. Strong security systems can protect you against the majority of threats, but brute-force attacks remain problematic. You can protect your system if you have substantial safeguards established at the perimeter. However, as Mashable notes, there is a valid reason why people started storing their money at banks rather than in their houses: stronger protections. Similarly, remember that cloud hosting providers specialize in security, with investment in expertise and resources outdoing the efforts of the typical private data center. Using distributed virtualization effectively outsources your DDoS (distributed denial-of-service) protections to a reliable third party.
  3. Celebrate randomness. In a distributed virtualization scenario, information is standardly concealed. The technology encrypts data both while it is stored and when it leaves the system. That level of protection is not always true of non-cloud environments. To demonstrate the difficulty of cracking public-cloud encryption, ScanSource uses the analogy of a sold-out football game at a stadium that holds 50,000 people. Everyone in attendance is a middle-aged man wearing the same outfit, identical jerseys differing only in the number on the back. You have two minutes to find the number on the back of the one man who fits your description. Before you even know who that is, though, you have to determine the cryptographic algorithm. Good luck.
  4. Distribute your data while confining your security components. Mashable notes that making sure data does not fall into the wrong hands is critical to cloud security and any IT safeguards. When you virtualize your systems, your applications and the information pertaining to them all stay in one safe setting. The experience of each user is similar to that of a traditional system and often preferable in terms of application responsiveness. For heightened security, all data is encrypted not just in motion but at rest, and the keys remain on your equipment at all times (a minimal sketch of this client-side approach follows this list). Automation becomes more resilient and error-free, with less need for human intervention.
  5. Upgrade on the fly. Traditional computing always involves some amount of downtime to upgrade your systems to the most recent versions. Distributed virtualization allows scripts to be repopulated in the background as you work. Any loopholes in the security are fixed in a much shorter window of time than with legacy infrastructures, where updating a system means pulling it offline, performing the installation, rebooting, and testing. Because it's organizationally challenging, time-consuming, and annoying to complete this process each time it's needed, ScanSource argues that "almost no one" keeps up with updates in the traditional model. The new systems automatically improve protection in this way.
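As a concrete illustration of points 3 and 4 (data encrypted at rest, with keys that never leave your own equipment), here is a minimal sketch using the Python cryptography package; the upload function is a hypothetical stand-in for a provider's storage SDK.

# Client-side encryption before upload: only ciphertext leaves the building,
# and the Fernet key stays on local equipment. upload_to_cloud() is a
# placeholder for a real provider's object-storage call.
from cryptography.fernet import Fernet

def upload_to_cloud(name: str, blob: bytes) -> None:
    # Hypothetical stand-in for an object-storage SDK call.
    print(f"uploaded {len(blob)} encrypted bytes as {name}")

key = Fernet.generate_key()        # generated and kept locally
cipher = Fernet(key)

record = b"confidential district records"
ciphertext = cipher.encrypt(record)
upload_to_cloud("records.bin", ciphertext)

# Later, the same locally held key decrypts whatever is retrieved.
assert cipher.decrypt(ciphertext) == record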
Cloud security is much stronger than many users have been led to believe. Even the US Cyber Command and National Security Agency believe distributed virtualization is a safe and reliable way to store and transmit data. Such security features as standardized encryption, automated upgrades, and placement of your data at distant locations create a stronger environment than was possible through traditional means.

Top 2014 Virtualization Topic: How to Integrate the Cloud into Your Business

Forbes published an article this month on three unexpected trends in cloud computing. The report notes that distributed virtualization is, somewhat shockingly, just getting started: businesses will spend approximately 6 times more than they do currently on the technology by 2017. The piece notes that the driving forces behind adoption of the systems are changing over time: two of the trends listed are related to shifts in business use, and these two trends are closely related.

First, rather than simply using the new form of computing to reduce their expenses throughout their infrastructure, many companies appreciate its possibilities to expand their IT capabilities with new tools for employees and consumers. Deutsche Bank is a prime example, using distributed virtualization to create a high-speed network with more than 4000 worldwide partners, for enhanced automation and more predictable cash flow.

Second, Geoffrey C. Fox of Indiana University – where the supercomputer Big Red II was revealed last year – has noted that the processing speeds offered by distributed virtualization are better than those of supercomputers in many cases. Along those lines, companies want to take advantage of the special properties of the technology – specifically, its incredible pace. Organizations want to use the platforms particularly for innovation and development. T-Mobile is exemplary of this trend: the company is enhancing its brand satisfaction by incorporating Twitter and Facebook feeds into its customer service portal, for immediate complaint response.

Third, companies expanded into these new systems experimentally in many cases. Now the technology has been more broadly accepted: its savings and performance capabilities are proven, and its security is widely trusted – even by the US Department of Defense. Now that the expectations for the technology have stabilized, companies are looking for ways to combine the new systems with legacy systems and to unify their environments for easier management. This third trend has been identified by InfoWorld as well. Because it is being mentioned in various places and is obviously on the minds of many entrepreneurs and IT executives, it deserves further exploration.

Why combining the systems is so important

Interconnectivity of these technologies is a genuinely growing trend: it's not an issue that just popped up this year. Last April, the Wall Street Journal noted that information technology professionals liked the new technology for its speed, update automation, and budgeting predictability/adaptability. However, integration challenges were a common complaint. Companies are starting to choose cloud hosting providers (CHPs) based on how easily the solution can be interwoven with the establishment's current architecture.

Problem-free combination is necessary because, in order for an organization to operate optimally, data must transfer seamlessly between each of the company's various systems. That's not just true between distributed virtual systems and traditional ones; it's true of the interaction between one virtual component and another. An application that shares files should be connected with a sales application and project management application without any glitches. Ideally the user logs in once and is able to enter all areas of the systems to which they have access, with all changes saved in real time. Integration becomes a greater concern as distributed virtualization is used more widely.
In other words, the incredible sixfold expansion of the computing model that's projected (see above) inevitably leads to a need to interconnect it with the general infrastructure. It's not mandatory that a comprehensive plan is in place; however, the integrity of data is compromised when the system is disparate and not updating properly from moment to moment. When data is not the same from one environment to the next, the advantages of the computing approach are not being properly utilized. The three strategies discussed below are recommended by Deloitte Consulting LLP (as referenced by the WSJ). Although the scenario of two virtual environments is used in these descriptions, the general game plan is the same regardless of the specific systems you are connecting.

Interconnection strategies for cloud computing applications

1. Connecting between virtual systems

In order for two distributed virtual systems to communicate with one another, one of the two must contain a layer that enables it to interchange data with external programs. This component allows the system to organize data in a format that is meaningful to the external entity (to translate, in a sense). It also gives the virtual machine the ability to contact the systems charged with data transfer and interaction between software. The layer also encrypts the information for security purposes. Typically this layer is built into a distributed virtual environment; however, it may have a simplistic or glitchy design that could cause frustration and lost time. If the layer is more complex, you will be able to adjust your various environments as needed, without hassle and at reduced expense. Adjustments become more necessary for worldwide systems, in which information must be changed based on geographical location (which in some cases involves true translation or conversion for understandability). Connecting virtual systems directly in this manner is ideal when one of the environments contains a particularly well-developed integrative layer, and when the two platforms are performing similar tasks within the overall scope of the business (such as a human capital management – HCM – system and a payroll system). The proprietary nature of some environments, though, can make this process difficult.

2. Connecting via middleware

You can use your current middleware capabilities to combine two virtual systems. The middleware serves the same function as the layer in the strategy above: it gets information from one environment, transforms it into the appropriate structure, and transmits it to the second environment (a minimal sketch of such a translation step follows these descriptions). This strategy works well for businesses that have a mature middleware system and on-staff expertise to connect multiple virtual environments. This route is often more cost-effective than any other solution: if the middleware already exists, all you are paying for is development of the connection and its maintenance within that extant system. It is also well-suited to adaptability and scalability.

3. Connecting via virtualization

You can also use a distributed virtual environment to connect the other ones. In other words, a cloud computing system serves as your integrative middleware. This market is still developing, so it may be cost-prohibitive. However, if you don't have professionals immediately available who can connect the virtual systems either directly or through middleware that is already in place, this option may make sense.
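As a minimal sketch of the middleware strategy (option 2), the snippet below pulls a record from one hypothetical environment, reshapes its fields, and posts it to a second, echoing the HCM-and-payroll example above; the endpoints and field names are invented for illustration.

# Middleware-style translation layer: fetch an HCM-style record, rename its
# fields to the payroll system's schema, and forward it. All endpoints and
# field names are hypothetical.
import requests

FIELD_MAP = {"empId": "employee_id", "fullName": "name", "grossPay": "gross_pay"}

def translate(record: dict) -> dict:
    # Keep only the mapped fields and rename them for the target system.
    return {FIELD_MAP[k]: v for k, v in record.items() if k in FIELD_MAP}

def sync_employee(emp_id: str) -> None:
    src = requests.get(f"https://hcm.example.com/api/employees/{emp_id}", timeout=10)
    src.raise_for_status()
    dst = requests.post("https://payroll.example.com/api/employees",
                        json=translate(src.json()), timeout=10)
    dst.raise_for_status()

if __name__ == "__main__":
    sync_employee("e-1042")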
Meaningful connection for usability & efficiency

Interconnectivity of your various systems is critical to optimizing the effectiveness of your business's infrastructure. You want all your data to be reliable in real time, immediately reflected across all the components of your environment. You can connect systems directly, through traditional middleware, or through virtual middleware. Any of the three options can make the best sense, depending on your particular situation.

Pros & Cons of WordPress

WordPress is an incredibly popular blogging platform and content management system (CMS). In fact, 33% of the world's websites are built using a CMS, and 19% (over half of the CMS sites) use WordPress (compared to 3% and 2% using Joomla and Drupal, respectively). That totals 69 million sites worldwide. Out of every 100 new sites built in the US last year, 22 of them use WordPress. All of those figures point to incredibly wide acceptance. Nonetheless, there are pros and cons to using the platform. Some of the major positives and negatives of the WordPress CMS are described below.

Pro: independence

WordPress was built primarily to serve as a blogging platform. However, it functions as a reasonably versatile CMS. As noted by Pragmatic Web, the CMS capability allows you to update the site without the need to work with a web developer, which can be challenging especially in the case of minor changes or time-sensitive updates. (Notably, any CMS offers this basic strength.)

Pro: free

As noted by Fig Creative, if you host the WordPress site yourself (as is typical), the software is completely free to download, use, and update. This aspect is particularly compelling for SMBs and nonprofits. It's also ideal for some testing and developmental situations.

Pro: open source

A core element of WordPress that relates to its no-cost model is that it is open source. As Media Realm indicates, it uses a GNU General Public License, which essentially allows you to adapt the code to fit your needs. You don't have to pay for WordPress now or at any point in the future, but you also have the freedom to dig inside it and change it. It's not based on a proprietary model, and you don't experience vendor lock-in if you want to move elsewhere down the line: you simply use a migration tool to depart.

Pro: built to build

As with the other major content management tools, WordPress offers a massive library of plugins to give your site additional functionalities. Some themes (site templates) contain built-in tools as well. Pragmatic notes that, because WordPress is so common, it's also simple to find a developer if there's anything you want that you can't find publicly through the site.

Pro: simple

WordPress essentially helped to democratize website building. It's a tool intended to allow people without development experience or strong technical skills to create websites, so its usability is (generally speaking) fantastic. Plugins, providing such components as portfolios and online stores, are simple to install (though it's critical to update and test them when new versions of WordPress are released). The themes make design a snap as well, mentions Fig.

Pro: vast pool of users

As established by the stats in the introduction, the scope of the WordPress community is enormous. As of April 2014, there are over 30,000 plugins and almost 2500 themes. However, per Media Realm, where the community really becomes powerful is when you are customizing the software. This facet becomes clear in online forums, such as WordPress Answers, with its tens of thousands of support questions and answers. WordPress Codex also serves as an extensive general manual and information source.

Con: updates

On the downside, updates are released on a regular basis, as described by Pragmatic. You must update the CMS itself, as well as your theme and any plugins you have installed.
There are many horror stories related to upgrading a WordPress site: Miriam Schwab of WordPress Garage, for example, almost lost the majority of her site while attempting an upgrade. It's easy to make mistakes in the process or to experience glitches that are out of your control.

Con: security

Fig notes that security is an issue with WordPress. Indeed, last year a botnet inundated WordPress sites, causing wide-scale mayhem. Open source software is compelling to hackers for the same reason it's compelling to developers: transparency. However, the main WordPress users who need to worry about hackers are those who don't take reasonable precautions. Use a password generator to make it more difficult for hackers to enter your site. Also change your administrative username to something else so that cybercriminal software can't enter that management account as easily.

Con: defining accessibility

Drupal and other full-scale content management systems give you the direct ability to define user roles and permissions. With WordPress, you must perform this task via a plugin. Media Realm notes that it's a little "hacky" not to have such a fundamental aspect of security built into the core code of WordPress.

Con: external script

Although there is a huge supply of tools from which to choose among the templates and plugins, those capabilities are developed by independent parties. Pragmatic notes that you don't know exactly what you're getting when you use these third-party offerings. Granted, reviews should give you a good sense of quality.

Con: online sales

If you have used eCommerce software such as Volusion or Shopify, you are familiar with a full feature set when you establish an online store. Creating a store within your WordPress site will not give you as many options as with strong systems that have been designed specifically for web sales, remarks Fig. However, it's not a bad way to get your eCommerce underway, as long as you understand you may need to eventually transition to something else.

Con: not built for size by default

WordPress is used by huge sites, such as eBay, Yahoo, Digg, and Ford. Be aware, though, that these sites are optimized with specially designed tools. Determining a strong caching plugin, for example, can be challenging. You also need to have an incredible hosting solution so that you can scale rapidly if you get a sudden surge in traffic.

***

Any system has pluses and minuses, and WordPress is no exception. Many people appreciate WordPress because it doesn't cost a penny and is open source, allowing each user a significant breadth of freedom. On the downside, security can be an issue if you aren't prepared, and WordPress isn't built ready-made for eCommerce or scalability. Most of the negatives with the CMS can be overcome, though. You don't have to use WordPress for your entire site; use it to the extent it makes sense. For growth, a viable cloud hosting solution with apps "on demand" gives you access to resources for expansion.

Identity Management & the Cloud

One of the top trends within the field of cloud computing for 2014 – according to Business 2 Community – is identity management. Businesses must limit access both to their on-site networks and to their software-as-a-service (SaaS) environments. This aspect of security within distributed virtual platforms has been a weak point for some organizations. This year, cloud hosting providers (CHPs) with adequate protections seek to stand out from those with less stringent security protocols.

What is identity management?

Identity management is also sometimes referred to as identity and access management (IAM). The field of IAM has three key activities (a minimal sketch of all three follows the list below):
  1. creating identities and administrating the identities themselves;
  2. granting appropriate privileges to identities; and
  3. managing access so that identities are properly validated or denied.
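As a minimal sketch of those three activities, the snippet below keeps an in-memory identity store; the usernames and role names are invented for illustration and are not part of any particular IAM product.

# The three IAM activities in miniature: create identities, grant privileges,
# and validate access. The store, usernames, and roles are purely illustrative.
users: dict[str, set] = {}

def create_identity(username: str) -> None:
    # Activity 1: create and administer the identity itself.
    users[username] = set()

def grant_role(username: str, role: str) -> None:
    # Activity 2: grant appropriate privileges to the identity.
    users[username].add(role)

def check_access(username: str, required_role: str) -> bool:
    # Activity 3: validate or deny access for the identity.
    return required_role in users.get(username, set())

create_identity("teacher_jones")
grant_role("teacher_jones", "gradebook:write")
assert check_access("teacher_jones", "gradebook:write")
assert not check_access("teacher_jones", "district:admin")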
Essentially, IAM refers to the policies and procedures surrounding the administration of user accounts within an information technology system. Originally in the field of computing, any sensitive environment required usernames and passwords controlled by the tech team. About 20 years ago, as the Internet became widespread, the number of users exploded. The task of managing login credentials was gradually transitioned from the IT staff to the public. Now that any user is capable of resetting a personal password via email, the general administration of access mechanisms has expanded. IT systems have also become increasingly critical aspects of personal and business productivity and interaction. As this process has occurred, identity management has become more confusing.

The Internet was built as a vast pool of standalone networks, with private areas only accessible to those with the appropriate permissions. Unfortunately, now that users have access to dozens or even hundreds of different environments, they have a wide variety of credentials to manage. Identity management solutions that integrate logons for all the various systems into one central hub have become popular with consumers. Authentication systems of this sort are being adopted by businesses as well, for the same basic reasons. Cloud computing is being used to manage IDs – via IDaaS (identity-as-a-service) – for two basic reasons:
  1. Security in distributed virtual environments has become more sophisticated, as evidenced by implementation of the strategy by the US Department of Defense; and
  2. Efficiency of the systems makes them both affordable and high-performance (i.e., fast).
Core concerns for distributed virtual ID management

When companies consider whether to use a cloud computing solution for their identity and access management needs, they should first be prepared. Ideally, you want your in-house identity management protocols to be strong and well-developed before you start to worry about the virtualized variety, says IT security and management consultant Philip Cox. Additionally, you need to keep two basic concerns in mind:
  • the forms of validation that meet your requirements; and
  • the sources of validation that you consider trustworthy (external versus internal).
Forms of validation

Identity is validated in two basic ways – organizationally and personally. The differences between these two forms of validation are as follows:
  • Organizational – Validation of an identity that occurs within a business, so that the identity is verified by the organization. Trust is strong for this validation method: the identity of users is confirmed by a larger entity.
  • Personal – Validation that is purely based on an individual’s claim to a given identity. Unlike when an organization vouches for validity of an ID, validation of this sort doesn’t have any checks or balances. For instance, any person could create a Gmail account with an assumed name.
Just because organizational validation is more trustworthy does not mean that it is always necessary. Businesses can utilize the organizational method for business accounts. Consumer accounts can rely on personal validation, both for ease-of-use and because the security risk of false identity is lower.

Sources of validation

You don't necessarily need to validate identity yourself in every case, unless your company must perform validation strictly for regulatory compliance reasons. Assuming your business does have leeway to accept outside validation sources, you need to decide what entities you will trust with identity validation. Acceptance or denial of external validation channels indicates the type of identity being used within your system:
  • Federated identity – This type of identity utilizes the checks and balances of the various firms that make up an organizational collective, which requires trust in the other member entities that make up the group; and
  • Local identity – This type of identity requires more direct management, but trust is not required in external organizations because all of the validation occurs in-house.
Advantages of IDaaS

According to David Linthicum of InfoWorld, there are three primary benefits to creating one central hub for ID authentication, using a distributed virtual model:
  1. validation of identity for environments both internal and external to the organization;
  2. creation of one location for vulnerability, so that security concerns can be monitored and addressed from a systemic headquarters rather than a number of branches; and
  3. reduction in cost by utilizing an aggregate strategy rather than one with numerous disparate components.
Several technologies involved with IAM

Gregg Kreizman, an analyst for Gartner, recommends that anyone assessing a distributed virtual solution to manage business identities should educate themselves on four technologies that are often involved:
  1. When considering authentication, Security Assertion Markup Language (SAML) is a wise choice for federation purposes. However, OpenID Connect, which utilizes RESTful APIs to filter identities, is also gaining traction.
  2. For retrieval of data in distributed virtual atmospheres, OAuth is a tool that has been adopted by Internet giants such as Twitter and Facebook (a minimal token-request sketch follows this list).
  3. A more recent innovation is Simple Cloud Identity Management (SCIM), an attempt to make validation for access to distributed virtual systems simpler and more affordable.
  4. A final option that is a bit more sophisticated, for better and worse, is UMA (User-Managed Access), which designates a quartet of parties – an authorizing user, a requester, a host, and an authorization manager – in order to store user authorizations so that accessing various systems is more streamlined.
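Because OAuth comes up repeatedly in these discussions, here is a minimal sketch of an OAuth 2.0 client-credentials token request; the token endpoint, client ID, secret, scope, and API URL are hypothetical placeholders rather than any real provider's values.

# OAuth 2.0 client-credentials flow in miniature: exchange client credentials
# for a short-lived access token, then present it as a Bearer credential.
# All endpoints and credentials here are invented placeholders.
import requests

TOKEN_URL = "https://idp.example.com/oauth2/token"
CLIENT_ID = "reporting-service"
CLIENT_SECRET = "keep-this-out-of-source-control"

def fetch_access_token() -> str:
    resp = requests.post(
        TOKEN_URL,
        data={"grant_type": "client_credentials", "scope": "reports:read"},
        auth=(CLIENT_ID, CLIENT_SECRET),   # HTTP Basic auth with the client credentials
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

token = fetch_access_token()
reports = requests.get("https://api.example.com/v1/reports",
                       headers={"Authorization": f"Bearer {token}"}, timeout=10)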
IDaaS is a major trend in cloud computing for 2014. Use of distributed virtualization strategies to manage identity has become more commonplace as security has become less of a concern – as evidenced by the Pentagon's adoption of the IT model. Companies can use both organizational and personal authentication methods with their users, depending on the scenario, validating through federated or local means as necessary.