EEPlat MetaData PaaS Whitepaper
IDC and Neusoft Jointly Release a Business Platform White Paper
Source: China Electronics News, 2015, Issue 46

Beijing — Neusoft recently held the launch ceremony for its UniEAP & SaCa national roadshow in Beijing. At the event, Neusoft Corporation and IDC jointly released the white paper "Business Platforms Powering Enterprise Digital Transformation." It argues that, as the global economy transitions to a digital economy, enterprises must change their traditional approach to IT: by unifying their technology stack on a business platform, they can achieve rapid development, strengthen in-house control of IT, and thereby drive business innovation and internet-oriented transformation.

IDC believes that in the future all enterprises will be technology-driven companies. IT will no longer play a supporting role in daily business activities but increasingly a leading one, embedded more and more in products and services rather than merely underpinning management and operations; enterprises' in-house IT capabilities will therefore continue to grow.

The white paper notes that, facing more complex and diverse future IT demands, a business platform needs six core capabilities, including full software life-cycle support, open platform customization, the ability to adapt to cutting-edge technology trends, and the accumulation of reusable software business assets, among others. Only with these can it help industry customers build business systems quickly and conveniently and respond calmly to market change.

(Xiaowen)
TECHNICAL WHITE PAPER
Exploring the Depth of Simplicity: Protecting Microsoft SQL Server with Rubrik
Released March 2017

TABLE OF CONTENTS
AUDIENCE
EXECUTIVE SUMMARY
COMMON CUSTOMER CHALLENGES
OVERVIEWS
  Capability Overview
  Setup Overview
  SLA Domain Policy Overview
  Configuration & Restore Overview
DEEP DIVE
  Incremental Forever Snapshots
  Granular Database Protection
  Point-In-Time Restores
  Restore User - Role Based Access Control
  Simple Configuration
  Transparent Connector Upgrades
  Auto-Discovery
  Centralized Management
  Encrypted Database Support
  Flash Optimized Parallel Ingest
  Flexibility to Protect SQL at a VMware Level
SQL DATABASE RECOVERY OPTIONS
ADVANCED SQL FUNCTIONALITY SUPPORT
  Always On
  Windows Server Failover Clustering (WSFC) with SQL Server
  Cross Version Restore
  Recovery to a Specific LSN via the API
UNDER THE HOOD
  SQL Backup Flow
  SQL Restore Flow
CONCLUSION

AUDIENCE
This white paper is intended for backup administrators and DBAs; it provides a deep dive into the implementation and benefits of Rubrik's support for Microsoft SQL Server backups.

Note: for the remainder of this white paper, we will simply refer to "SQL Server" to be concise, rather than "Microsoft SQL Server", "SQL", or "MS SQL".

EXECUTIVE SUMMARY
Many organizations use Microsoft SQL Server for their critical applications. Over the years SQL Server has steadily improved to become a critical component of modern datacenters. Despite these improvements, SQL Server backups have often been a question of trade-offs between cost, efficiency, and simplicity. Rubrik's SQL Server backup functionality builds on Rubrik's policy-driven architecture, extending it to SQL backups.
Although SQL Server environments can be complex, Rubrik's support for SQL Server backups aligns with the core architectural and operational simplicity of the Rubrik platform. For a walkthrough of Rubrik architecture, see the "Technology Overview & How It Works" white paper.

COMMON CUSTOMER CHALLENGES
In creating Rubrik's SQL support, we discussed with our customers the challenges they were seeing and incorporated those into our design goals. We heard...

OVERVIEWS
CAPABILITY OVERVIEW
Aligning with the challenges we heard from customers, the key benefits of our approach are:
• Auto-discovery of all instances, databases, and clusters on each SQL Server, lowering operational overhead during configuration
• Centralized management via visibility in the Rubrik UI of all databases, both those being backed up and those excluded from backup
• "Incremental-forever" backups via block mapping and intelligent transaction log handling, dramatically reducing local storage requirements, providing much faster backups, and reducing network usage
• Granular database protection via Rubrik SLA policies - the ability to set different policies at the server and instance level, as well as per database on the same SQL Server or instance
• Seamless point-in-time restore via a simple, efficient interface providing full-backup restore plus transaction log replay in a single operation
• Log truncation and log management, providing further operational time savings
• Copy-only mode for seamless transition from, or coexistence with, an existing backup product
• Full application awareness in high-availability deployments such as Always On Availability Groups and Windows Failover Clustering
We'll explore these and more in the feature Deep Dive below.

SETUP OVERVIEW
Rubrik supports all versions of SQL Server supported by Microsoft via Extended Support - Windows Server 2008 R2 and newer with SQL Server 2008 and newer. Please see the Rubrik Compatibility Matrix for supported version details.
Per Microsoft, Extended Support for SQL Server 2005 ended in April 2016.

To provide integrated SQL functionality, Rubrik uses a lightweight SQL connector with minimal CPU and storage requirements. The connector encrypts all backup traffic. Packaged as an MSI, it can be installed manually or easily deployed via standard provisioning tools. The connector does not require changing existing maintenance plans or rebuilding them from scratch. To reduce operational overhead, connector upgrades after installation are automatic and completely transparent to the user.

Once the connector is installed, SQL databases are automatically discovered. The user can then assign policies to individual databases - these policies are core to Rubrik and can be leveraged across multiple data types.

SLA DOMAIN POLICY OVERVIEW
A Rubrik SLA Domain Policy is a declarative policy encompassing the core items needed for backup and recovery, replacing the need to individually configure jobs, tasks, and other items. SLA Domain Policies are a core part of Rubrik's architecture and extend across all data types, as shown here.

Figure 1: Rubrik Policy Overview

Let's walk through the pieces needed to configure an SLA Domain Policy that can apply to all data types - we'll look at SQL-specific items in the next section.

1. Backup Frequency: also known as Recovery Point Objective (RPO). Simply put, how often are backups taken?
• For databases, this determines how often a database restore point is synthesized from incrementally transmitted block maps. For databases in Full Recovery mode, RPO is also impacted by the frequency of transaction log backups.

2. Availability Duration: also known as retention. Simply put, how long are backups retained?
• For databases, retention may often be shorter than for other data types unless longer retention is needed for regulatory or compliance reasons.

Figure 2: Rubrik Policy Components - Frequency and Duration

3. Archival Policy: also known as Recovery Time Objective (RTO), or "when and where to archive". Archive targets can be public cloud (AWS or Azure) or private cloud (S3-compatible object stores or NFS). This dictates which cloud target is used for archive, and when archives are maintained solely in the cloud rather than on the local Rubrik cluster. If archives are maintained solely in the cloud (past 30 days, for instance), RTO is longer due to the time required to retrieve data back to the Rubrik cluster.
• For databases, long-term archives required for regulatory or compliance reasons can be stored in a cloud archive.

Figure 3: Rubrik Policy Components - Archival

4. Replication Policy: this relates to Disaster Recovery. Simply put, how much replicated data should be maintained at a DR site?
• For databases, this will often be a shorter value. In a disaster recovery situation, recovering the most recent state of a database is most common. This policy section allows cost savings by storing only a recent subset of data at a DR site.

Figure 4: Rubrik Policy Components - Replication

The policy architecture is intentionally straightforward to configure yet powerful - the screenshots above illustrate the concepts well.
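As an illustrative sketch only (the class and field names below are our own shorthand, not Rubrik's actual API schema), the four policy components just described can be modeled as a single declarative object that is then assigned to many protected objects:

```python
from dataclasses import dataclass

@dataclass
class SLADomainPolicy:
    """Hypothetical model of an SLA Domain Policy's four components."""
    name: str
    backup_frequency_hours: int      # 1. Backup Frequency (RPO)
    retention_days: int              # 2. Availability Duration (retention)
    archive_after_days: int          # 3. Archival Policy: cloud-only past this age
    replication_retention_days: int  # 4. Replication Policy: subset kept at DR site

# One declarative policy replaces many per-object job configurations.
gold = SLADomainPolicy(
    name="Gold-SQL",
    backup_frequency_hours=4,        # synthesize a restore point every 4 hours
    retention_days=30,
    archive_after_days=30,           # older restore points live only in cloud archive
    replication_retention_days=3,    # DR site keeps only the most recent 3 days
)
```

The point of the declarative shape is that assigning `gold` to a server, instance, or database is the entire configuration step - there are no separate jobs or schedules to define.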
Please see the Rubrik documentation and the core architecture white paper for a more thorough walkthrough of SLA Domain details.

CONFIGURATION & RESTORE OVERVIEW
As noted above, SLA Domain Policies can be applied at the SQL server, instance, database, or cluster level. A visual walkthrough illustrates the concepts well, along with the notably few SQL-specific configuration options. Auto-discovered SQL inventory is listed at the server and database level - an instance-level view is also available.

Figure 5: SQL Inventory - Server Level
Figure 6: SQL Inventory - Database Level

After selecting a server, instance, or database, clicking the "Manage Protection" button brings up the policy assignment screen, where you can assign a policy as well as set options for copy-only backups, log backup frequency, and log backup retention.

Figure 7: SQL Policy Management
Figure 8: SQL Policy Management

Once protected, the restore process is a simple slider for point-in-time restore.

Figure 9: Point in Time Restore Control

DEEP DIVE
Let's explore, feature by feature, the items discussed briefly in the overview above. Many of the features explored below are intentionally transparent during day-to-day operations.

INCREMENTAL FOREVER SNAPSHOTS
Incremental-forever snapshots dramatically reduce storage usage as well as network traffic, both inside the datacenter and over replication links. Although SQL Server does not natively support incremental-forever backups, Rubrik provides this capability via block mapping.

Databases using the Full recovery model can be protected through policy-driven snapshots and transaction log backups, or through policy-driven snapshots only. The Rubrik cluster performs an initial full database backup followed by periodic block mapping to detect and transmit changed data based on the assigned policy.
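To make the block-mapping idea concrete, here is a minimal sketch (our own illustration under simplified assumptions, not Rubrik's implementation): fingerprint each fixed-size block of the database files, and on each backup transmit only the blocks whose fingerprints differ from the previous snapshot's metadata.

```python
import hashlib

BLOCK_SIZE = 4  # tiny block size for illustration; real systems use KB/MB blocks

def block_map(data: bytes) -> dict:
    """Fingerprint each fixed-size block - the 'metadata' kept per snapshot."""
    return {
        i: hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
        for i in range(0, len(data), BLOCK_SIZE)
    }

def changed_blocks(prev_map: dict, current: bytes) -> dict:
    """Return only blocks whose fingerprints changed since the previous snapshot."""
    return {
        offset: current[offset:offset + BLOCK_SIZE]
        for offset, digest in block_map(current).items()
        if prev_map.get(offset) != digest
    }

full = b"AAAABBBBCCCC"                      # initial backup: no previous metadata,
delta = changed_blocks({}, full)            # so all 3 blocks are transmitted
incr = changed_blocks(block_map(full),      # later backup: only the one block that
                      b"AAAAXXXXCCCC")      # actually changed is transmitted
```

After the first full, every subsequent backup transfers only the changed subset - which is why storage, backup time, and network usage all shrink together.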
Additionally, there are frequent interim backups of the transaction log, with the ability to specify both the default frequency and the retention of transaction log backups.

Figure 10: SQL Specific Policy Options

The combination of database snapshots and transaction log backups permits granular restore of a database to a specified recovery point. See the "Advanced SQL Functionality" section below for details on the "Recovery to a Specific LSN via the API" feature. During backups, transaction logs can be either truncated or left untouched via a "Copy Only" mode - a checkbox option available during SLA assignment. For databases using the Simple recovery model, the Rubrik cluster performs policy-driven snapshots of the database, similar to the approach outlined above.

GRANULAR DATABASE PROTECTION
Protection policies can be assigned at the Windows host level, to an entire SQL instance, to any individual database, or even at multiple overlapping levels - the most granular assignment has priority. Derived assignment provides a way to uniformly manage and protect a group of databases; however, it only applies to the databases that exist at the time of the assignment. This provides flexibility on SQL servers hosting many databases for different purposes - some of which may have more stringent RPO and RTO requirements than others. Rubrik can easily assign unique policies to individual databases. See the "SLA Domains" column in the screenshot below as an example.

Figure 11: SLA Domains

POINT-IN-TIME RESTORES
SQL databases often support critical workloads which require an RPO measured in minutes. Rubrik achieves this by backing up the transaction logs in addition to the databases. During the restore process, a user specifies a desired restore time simply by choosing a day and dragging a slider to the desired time. Alternatively, a specific time can be typed in.
The system then performs the following steps:
• Restores the full snapshot closest to the user-specified time
• Replays and applies transactions from the point of the snapshot to the time specified by the user. This is often known as "rolling transaction logs".
While the time needed to restore depends on multiple variables (network speed, primary storage, and more), the time spent replaying transaction logs can be reduced by taking full snapshots more frequently - something configurable via Rubrik policy.

Figure 12: Point in Time Restore Controls
Figure 13: Role Based Access Control

RESTORE USER - ROLE BASED ACCESS CONTROL
Restores can be performed either by a Rubrik administrator or via the "End User" role - a role that is assigned granular per-object permissions on creation to perform restores. Users with this role can specifically be granted permission to perform in-place "Destructive Restores" which overwrite existing data. Alternatively, the End User role can export to a new location as described below.

SIMPLE CONFIGURATION
There is no job configuration and no requirement for a staging server - simply the Rubrik cluster and SQL servers with the connector installed.

TRANSPARENT CONNECTOR UPGRADES
In larger environments, upgrading backup agents to new versions can be time-consuming and often delays backup environment upgrades due to the time required for agent updates. Rubrik's ability to automatically and transparently upgrade its SQL connector removes the time needed for manual agent updates. We have done this via an inner and outer core design - the outer core detects software upgrades and upgrades the inner core connector. The connector upgrade does not require a restart of SQL Server or the underlying Windows host.

AUTO-DISCOVERY
Rubrik auto-discovers all instances and databases on each SQL server where the connector is installed.
Multiple views are then provided which are easily sortable and searchable - Hosts/Instances, All DBs, and Failover Clusters.

Figure 14: Auto-Discovery of all databases

CENTRALIZED MANAGEMENT
Although an overused term, Rubrik does provide a "single pane of glass" for all supported backup workloads. For SQL specifically, Rubrik customers can see all backed-up SQL servers, instances, and databases in a single interface. The same UI and the same policy engine are used whether for SQL, VMware, Linux, or Windows. This does not preclude the ability for DBAs to verify backup success from SQL Server Management Studio by querying the "backupset" table in the "msdb" database, which records all successful backups. See this link on MSDN for further details. A full walkthrough of this process is also available as a Knowledge Base article in the Rubrik Support Portal.

ENCRYPTED DATABASE SUPPORT
Rubrik will back up encrypted databases, and fingerprint-based compression also works on encrypted databases. For restore, the workflows are the same with one additional step - users must manage keys manually. The steps required to move keys are detailed in this Microsoft article. Once this is done, the intended database can be exported from the Rubrik UI.

FLASH OPTIMIZED PARALLEL INGEST
Although a core Rubrik capability, flash-optimized parallel ingest is particularly relevant for large database backups. Backups are taken in parallel across multiple nodes thanks to Rubrik's distributed job scheduling, and land on flash before destaging to disk. This removes bottlenecks during large initial backups and allows faster protection.

FLEXIBILITY TO PROTECT SQL AT A VMWARE LEVEL
In a virtualized environment using the Simple recovery model, Rubrik's enhanced VMware backup capabilities may be sufficient, with even lower operational overhead.
While this does not include many of the specific benefits listed in this paper, it does provide backup consistency via Rubrik's VSS implementation, Instant Recovery via Live Mount, and object-level recovery using Kroll. For databases in the Simple recovery model, this may meet or exceed business requirements. This approach notably does not provide point-in-time restore, granular database protection, or log management.

SQL DATABASE RECOVERY OPTIONS
There are two recovery options for SQL databases - Restore and Export.

Choosing Restore drops the original database and creates a new database on the same instance with the same name and file structure. A common use case for this option is corruption of the original database, where a restore to a previous point in time is desired.

Choosing Export creates a new database. If restoring to the same SQL instance, a different database name is used, with a file structure reflecting that name. If restoring to a different SQL instance, the same or a different database name can be used. A common use case for this option is using a production database snapshot as the source to spin up a dev/test clone.

Given SQL Server's capability for a database to have multiple data and log files, customers can specify the restore location at an individual file level during an Export operation.

ADVANCED SQL FUNCTIONALITY SUPPORT
ALWAYS ON
Always On is a high-availability solution provided by SQL Server. Additional details are available from Microsoft in Overview of Always On Availability Groups (SQL Server). Rubrik will automatically detect whether databases are part of an Availability Group and will not "double back up" the same database located on multiple servers in the same Availability Group.
In the case of an Always On failover, simply switch protection to the new database with no loss of history and no need for a new full backup.

WINDOWS SERVER FAILOVER CLUSTERING (WSFC) WITH SQL SERVER
Windows Server Failover Clustering (WSFC) with SQL Server is a popular option for providing SQL high availability due to its lower storage and SQL licensing requirements. Please see this Microsoft article for more details. Rubrik supports WSFC with SQL Server for backups. After installing connectors on Node A and Node B, Rubrik will recognize an entity called "Failover Cluster" with its underlying instances and databases. Use the same methods in the Rubrik UI for SLA assignment as for any other SQL databases. In case of failover, no manual steps are required - Rubrik will automatically protect the database on Node B with no new full backup required.

Figure 15: Failover Clusters

CROSS VERSION RESTORE
SQL DBAs often need to restore SQL backups to different versions of SQL Server, whether for testing or for restoring old backups where only newer SQL Server versions remain in the customer environment. Rubrik supports restoring databases to the same or a newer SQL Server version. For precise details, please consult the Rubrik Compatibility Matrix for a current list of supported source and target SQL Server versions.

RECOVERY TO A SPECIFIC LSN VIA THE API
Rubrik provides the SQL DBA the ability to recover to a specific Log Sequence Number (LSN) via the Rubrik API for Full recovery model databases. This is especially useful when trying to recover to a specific transaction or set of transactions. An LSN recovery differs from a standard transaction log recovery in that it allows you to recover to a specific state anywhere within the transaction log, rather than only to the end of the transaction log.

{
  "recoveryPoint": {
    "lsnPoint": {
      "lsn": "LSN Number"
    }
  }
}

UNDER THE HOOD
SQL BACKUP FLOW
1. The Rubrik cluster initiates a snapshot based on the SLA policy.
For security reasons, all backups are initiated by the Rubrik cluster rather than by any client or connector.
2. The connector takes a snapshot of the SQL Server database.
3. The Rubrik cluster sends the previous snapshot metadata (if any) to the connector.
4. The connector scans the current snapshot and compares its metadata to the previous snapshot's metadata.
5. For data blocks where the metadata doesn't match, the connector sends the data to Rubrik.
6. After all the database files are scanned, the connector deletes the snapshot.

SQL RESTORE FLOW
1. Rubrik finds the snapshot and log backups needed to recover to the point in time specified by the user.
2. The connector verifies file permissions via a temporary database.
3. Rubrik transfers data files (.mdf) and log files (.ldf) to their destinations.
4. The connector uses a native SQL restore method to restore the database.
5. Rubrik transfers log backups to a temporary directory.
6. The connector issues SQL commands to replay log backups to the requested point in time.
7. After replay of the logs is complete, the connector deletes the log backups from the temporary directory.
8. The connector opens the database to be externally accessible. The Restore/Export is done.

For further details, please see the "SQL Server Databases" chapter of the Rubrik User Guide or reach out to your Rubrik sales team.

CONCLUSION
While robust and full-featured, Rubrik's support for SQL backups extends the Rubrik focus on simplicity by understanding customers' true operational requirements - who says backups can't be fun?

299 South California Ave. #250, Palo Alto, CA 94306
How SOAR Is Transforming Threat Intelligence

The benefits of digital transformation for any enterprise are clear, but the transformation also comes with security implications, as new technologies expand the attack surface, enabling attackers to come from anywhere. With cloud computing, automation, and artificial intelligence now mainstream, attackers can carry out their campaigns at unprecedented levels of sophistication and scale with minimal human intervention. Today, threat actors attack computers every 39 seconds.1 A report by Cybersecurity Ventures projects that by 2021, a business will fall victim to ransomware every 11 seconds.2 This is only possible because attackers are taking advantage of machine speed.

1. "Hackers Attack Every 39 Seconds," Security Magazine, February 10, 2017, https:///articles/87787-hackers-attack-every-39-seconds.
2. "Global Cybercrime Damages Predicted To Reach $6 Trillion Annually By 2021," Cybercrime Magazine, December 7, 2018, https:///cybercrime-damages-6-trillion-by-2021.

Incident Responders
Incident responders are worried about damage control. They look for possible breaches, and if they find evidence of one, their job is to investigate and prevent it from spreading. Due to the sensitive nature of breaches, all evidence needs to be well documented and shared with all stakeholders. Incident responders have access to tools that help them contain breaches, such as EDR tools to kill the end host. They engage firewall administrators to deploy policies that block propagation at the network level.
They rely heavily on external threat intelligence to learn about the profiles and common techniques of attackers, allowing them to respond confidently and with precision.

Accordingly, incident responders are challenged when it comes to:
• Knowledge transfer, where a lack of collaboration between teams introduces gaps in security.
• Case management, because generic case management is not ideal for security use cases, resulting in inefficiency and poor documentation.
• Lack of threat intelligence, as many incident responders are forced to use manual, flawed processes to gain context around external threats, causing delays and risks.

Incident responders need:
• Full security case management to document their findings in detail, enable real-time collaboration with other stakeholders, and provide preventive and quarantine measures across the enterprise.
• Threat intelligence to provide deeper context around attackers and their motivations.

Figure 2: Incident responder challenges - disparate knowledge transfer introduces communication gaps; security-centric case management is lacking, resulting in inefficiency; missing real-world external context to prioritize alerts.
Figure 1: SOC analyst challenges - too many alerts and not enough people to handle them; repetitive, manual actions across tools, processes, and people; it takes days to investigate and respond to threats.

...detect, investigate, and respond to advanced cyberthreats. Unfortunately, security teams of all sizes are overwhelmed and unable to function at full capacity due to a shortage of cybersecurity skills, high volumes of low-fidelity alerts, a plethora of disconnected security tools, and a lack of external threat context. To address these challenges, we first need to break down the inner workings of a SOC and get a sense of what happens.
Only then will we be able to appreciate the enormity of what SOC teams are up against and begin to apply solutions that work.

In larger organizations, mature security operations teams have a lot of moving parts. Three main functions make up effective security operations: SOC analysts, incident responders, and threat analysts.

SOC Analysts
SOC analysts look at thousands of internal alerts daily, sourced from security information and event management (SIEM) technologies, endpoint detection and response (EDR) systems, and sometimes hundreds of other internal security tools. Their job is to be the eyes of the enterprise: to detect, investigate, determine the root cause of, and respond quickly to security incidents. SOC analysts continuously monitor the network using detection tools, identifying and investigating potential threats. Once they identify a potential risk, analysts also need to document their findings and share recommended actions with other stakeholders.

All this means SOC analysts often struggle with:
• Alert fatigue: The average enterprise receives more than 11,000 security alerts per day,3 and doesn't have enough people to handle them.
• Lack of time: Repetitive, manual, and administrative tasks take too long. A lack of integration across the many tools analysts must use slows down every stage of the process.
• Limited context: It often takes days to investigate and respond to threats. Security tools don't provide adequate context on alerts or their relevance to the environment, forcing analysts to piece these things together manually.

To overcome these problems, analysts need:
• Automation to take care of daily tasks so they can focus on what really matters.
• Real-time collaboration with the rest of the team so they are always in sync and learning from one another.
• Threat intelligence that delivers context to help them understand the relevance and potential impact of a threat.

3. According to a commissioned study conducted by Forrester Consulting on behalf of Palo Alto Networks, February 2020. As of the publication of this document, the report has not yet been officially released.

Figure 4: Holistic view of a typical day in a SOC

• Difficulty taking action, since putting threat intel into action is highly manual and relies on other teams.

Threat intelligence analysts need:
• Full control over threat intelligence feed indicators to build their own logic and reputation scoring based on their environment and business needs.
• Collaboration with other teams to quickly arm them with rich context and up-to-date research.
• Robust documentation to capture their findings.

Figure 3: Difficulties facing threat analysts - lack of control (manual tuning and scoring of IOCs); siloed workflows (incidents and threat intel are broken across tools, people, and processes); putting threat intel into action is highly manual and repetitive.

Threat Intelligence Analysts/Programs
Threat analysts identify potential risks to organizations that have not been observed in the network. They provide context around potential threats by combining external threat intelligence feeds from multiple sources with human intelligence.
According to a recent survey conducted by the SANS Institute, 49.5% of organizations have some type of threat intelligence team or program with its own dedicated budget and staffing.4 This is evidence of the growing importance of threat intelligence analysts, who help to identify attackers, uncovering their motivations, techniques, and processes. Threat intelligence teams pass their findings on specific attacks, as well as broader threat landscape reports, to SOC and incident response teams to build better preventive measures.

Threat intelligence analysts face:
• Lack of control over threat intelligence feeds, forcing analysts to manually tune and score indicators of compromise (IOCs) to match their environment.
• Siloed workflows, causing poor communication and integration between incident response and threat intelligence tools, teams, and processes.

4. "2020 SANS Cyber Threat Intelligence (CTI) Survey," SANS Institute, February 11, 2020, https:///reading-room/whitepapers/threats/paper/39395.

Security teams have adopted security orchestration, automation, and response (SOAR) platforms to manage alerts across all sources, standardize processes with playbooks, and automate response for any security use case, but there is still a significant gap when it comes to threat intelligence management. Industry analysts have identified this as an issue, offering guidance that SOAR and TIPs need to converge. TIPs merely add complexity by aggregating intelligence sources without the real-world context or automation required to take quick, confident action. It's time for a different approach.

Figure 5: A typical SOAR + TIP siloed deployment

We Need an Extended SOAR Platform
Cortex™ XSOAR, with native Threat Intel Management, just makes sense. As part of the extensible Cortex XSOAR platform, Threat Intel Management defines a new approach by unifying threat intelligence aggregation, scoring, and sharing with playbook-driven automation. It empowers security leaders with instant clarity into high-priority threats to drive the right response, in the right way, across the entire enterprise. Cortex XSOAR unifies case management, real-time collaboration, and Threat Intel Management in the industry's first extended security orchestration, automation, and response platform.

Figure 6: Cortex XSOAR playbook-driven automation
Figure 7: Benefits of Threat Intel Management - take complete control of your threat intelligence feeds; make smarter incident response decisions by enriching every tool and process; close the loop between intelligence and action with playbook-driven automation.

Cortex XSOAR allows you to:
• Eliminate manual tasks with automated playbooks to aggregate, parse, deduplicate, and manage millions of daily indicators across multiple feed sources. Extend and edit IOC scoring with ease. Find the providers that have the most relevant indicators for your specific environment.
• Reveal critical threats by layering third-party threat intelligence over internal incidents to prioritize alerts and make smarter response decisions. Supercharge investigations with high-fidelity, built-in threat intelligence from the Palo Alto Networks AutoFocus™ service. Enrich any detection, monitoring, or response tool with context from curated threat intelligence.

Security analysts deal with millions of indicators collected from hundreds of multi-sourced intelligence feeds. These indicators lack the context required for analysts to make informed decisions and act with precision. The tools at their disposal can't handle the sheer volume of indicators, and analysts end up re-prioritizing indicators to match their environment. Native Threat Intel Management gives analysts complete control and the flexibility to incorporate any business logic into their scoring.
Built-in inte-gration with more than 370 vendors allows analysts to react in real time as the indicators are consumed.report millions of indicators on a daily basis the context analysts need to make informed decisions and take action handle limited amounts of threat intelligence data and need to constantlyre-prioritize indicators Figure 9: Challenges of disconnected intelligence toolsFigure 10: Intelligence management before and after Cortex XSOAR ISACs AAPIAutomated playbooksIndicatorvalue report Indicator lifecycle Third-party Intel sharingvisualization Threat intelligence enrichment and prioritization3000 Tannery Way Santa Clara, CA 95054M a in: +1.408.753.4000S a les: +1.866.320.4788Support: +1.866.898.9087© 2020 Palo Alto Networks, Inc. Palo Alto Networks is a registered t rademark of Palo Alto Networks. A list of our trademarks can be found at https:///company/trademarks.html. All other marks mentioned herein may be trademarks of their respective companies. how-cortex-is-transforming-threat-intelligence-wp-041720of the most common use cases include phishing, s ecurity op-erations, incident alert handling, cloud security orchestration, vulnerability management, and threat hunting.The future of SOAR includes native Threat Intel Manage -ment, enabling teams to break down silos between securi-ty operations and threat intelligence functions. 
When these are offered together in one platform, SOC analysts, incident responders, and threat intelligence teams can unify their efforts against advanced adversaries, optimizing their com -munication, efficiency, and access to insights.Cortex XSOAR redefines orchestration, automation, and response with the industry’s firstextended SOARplatform that includes automation, orchestration, real-time collab-oration, case management, and Threat Intel Management, enabling security teams to keep pace with attackers now and in the future.Visit us online to learn more about Cortex XSOAR.Breadth of Cortex XSOAR Use Cases The open and extensible Cortex XSOAR platform can be applied to a wide range of use cases—even to processes outside the purview of the SOC or security incident response team. SomeIncorporate any business logic into collection, scoring,and integrations to security devices React in real time to new indicators as they are consumed Defend your network instead of spending time building integrations integrations Figure 11: Cortex XSOAR benefits across the SOC。
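The aggregation step described above, pulling indicators from many feeds, deduplicating them, and resolving conflicting scores, can be sketched in a few lines. This is an illustrative sketch only: the feed names, record shape, and max-score merge rule are assumptions, not Cortex XSOAR APIs.

```python
def aggregate_indicators(feeds):
    """Merge indicators from multiple feeds, deduplicating by value and
    keeping the highest reported score per indicator (one possible rule)."""
    merged = {}
    for feed_name, indicators in feeds.items():
        for ind in indicators:
            entry = merged.setdefault(
                ind["value"], {"value": ind["value"], "score": 0, "sources": []}
            )
            entry["score"] = max(entry["score"], ind["score"])
            entry["sources"].append(feed_name)
    return list(merged.values())

# Hypothetical feeds: two sources report the same IP with different scores.
feeds = {
    "isac_feed": [{"value": "198.51.100.7", "score": 60}],
    "premium_feed": [{"value": "198.51.100.7", "score": 85},
                     {"value": "evil.example.com", "score": 70}],
}
merged = aggregate_indicators(feeds)
```

A real playbook would also attach context (campaigns, malware families) to each merged indicator before routing it to enrichment or blocking steps.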
Bytom: An Interaction Protocol for Multiple Byte Assets

Abstract
The Bytom Blockchain Protocol (Bytom for short) is an interaction protocol for multiple byte assets. Heterogeneous byte assets in different forms (native digital currencies and digital assets) and atom assets (warrants, equities, dividends, bonds, intelligence, forecast information, and other assets with counterparts in the traditional physical world) that operate on the Bytom blockchain can be registered, exchanged, wagered, and engaged in more complex contract-based interactions through this protocol.
Bytom bridges the atomic world and the byte world, promoting the interaction and circulation of assets between the two.
Bytom adopts a three-layer architecture: an application layer, a contract layer, and a data layer. The application layer is friendly to mobile and other terminals, making it convenient for developers to build asset-management applications. The contract layer uses genesis contracts and control contracts for asset issuance and management; at the bottom it supports BUTXO, an extended UTXO model, with an optimized virtual machine that uses an introspection mechanism to prevent the deadlock states possible under Turing completeness. The data layer uses distributed-ledger technology to implement asset issuance, spending, exchange, and other operations. The consensus mechanism is a PoW algorithm friendly to AI ASIC chips: matrix and convolution computations are introduced into the hashing process, so that mining rigs, once idle or obsolete, can be repurposed for AI hardware-acceleration services, generating additional social benefit.
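As a rough illustration of the idea of mixing matrix computation into a proof-of-work hash loop, here is a toy sketch. Bytom's actual algorithm is far more elaborate; the 4x4 matrix size, the self-multiplication mixing rule, and the single-byte difficulty check below are all toy assumptions.

```python
import hashlib

def matrix_mix(seed_bytes, size=4):
    """Toy stand-in for the matrix step: build a size x size matrix from the
    seed, multiply it by itself mod 256, and fold the result back into bytes."""
    vals = list(seed_bytes[: size * size].ljust(size * size, b"\x00"))
    m = [vals[i * size:(i + 1) * size] for i in range(size)]
    prod = [[sum(m[i][k] * m[k][j] for k in range(size)) % 256
             for j in range(size)] for i in range(size)]
    return bytes(x for row in prod for x in row)

def toy_pow(header: bytes, difficulty_prefix: bytes = b"\x00") -> int:
    """Search for a nonce whose matrix-mixed double hash starts with the
    difficulty prefix."""
    nonce = 0
    while True:
        digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        mixed = hashlib.sha256(matrix_mix(digest)).digest()
        if mixed.startswith(difficulty_prefix):
            return nonce
        nonce += 1
```

The point of the design, as the paper argues, is that hardware efficient at the matrix step is also useful for AI workloads once it leaves mining service.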
1 Bytom's Mission, Goals, and Innovations
1.1 Problem Overview
Broadly speaking, the information revolution has profoundly changed the world we live in; the dominance of a purely atom-built world is being challenged. Against the backdrop of an approaching big-data singularity and large-scale growth in computing power, the Internet is transitioning from "information is power" to "computation is power," and the structure of the world economy, and the shifts of power within it, are increasingly constituted by byte information.
Information flows and byte flows carrying "negative entropy" have become part of what individuals, enterprises, and institutions depend on to survive and operate.
The evolutionary path has progressed step by step: from the early "byte tool" era, in which bytes were auxiliary products for improving efficiency (for example, Excel spreadsheets and email); to the subsequent "byte currency" era of value symbols that exist in byte form with no corresponding physical carrier or medium (for example, Bitcoin, Ether, and the tokens of various public and consortium chains); and on to the broader, more diverse "byte asset" era, in which anything valuable and exchangeable among atom assets, such as income rights, equity, debt, and securitized assets in the real economy, can leap onto a tamper-proof, traceable, information-symmetric blockchain distributed ledger and, through programmable smart contracts, interact with prediction markets in finance, gaming, insurance, and beyond.
kundb White Paper

In today's big-data era, enterprises face challenges in storing, managing, and analyzing massive volumes of data. Traditional relational databases can no longer meet modern enterprises' demands for high scalability, high availability, and high performance. kundb, an emerging distributed database, was created in response, offering enterprises an efficient, reliable, and flexible solution for data storage and management.
kundb adopts an advanced distributed architecture that supports horizontal scaling and elastic expansion. By sharding data across multiple nodes, kundb easily handles ever-growing data volumes and concurrent access pressure. When business demand increases, simply adding new nodes scales system capacity linearly, with no downtime for maintenance and no data migration, greatly improving the system's scalability and flexibility.
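One common way to get the shard routing and low-disruption node addition described above is a consistent-hash ring: adding a node remaps only a fraction of keys. This is an illustrative sketch under that assumption; the paper does not specify kundb's actual sharding mechanism.

```python
import bisect
import hashlib

class ShardRouter:
    """Consistent-hash ring with virtual nodes for even key distribution."""

    def __init__(self, nodes, vnodes=64):
        self.vnodes = vnodes
        self.ring = []  # sorted list of (hash, node)
        for node in nodes:
            self._add(node)

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def _add(self, node):
        for i in range(self.vnodes):
            self.ring.append((self._hash(f"{node}#{i}"), node))
        self.ring.sort()

    def add_node(self, node):
        """Online scale-out: only keys between the new vnodes move."""
        self._add(node)

    def route(self, key):
        """Route a key to the first ring position at or after its hash."""
        idx = bisect.bisect(self.ring, (self._hash(key), "")) % len(self.ring)
        return self.ring[idx][1]
```

For example, `ShardRouter(["n1", "n2", "n3"]).route("user:1")` deterministically picks one node, and the same key keeps routing consistently after `add_node("n4")` unless it falls in the newly claimed range.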
High availability is another highlight of kundb. Through replication and automatic failover, kundb ensures data safety and service continuity. Each data shard keeps multiple replicas on multiple nodes; when a node fails, the system automatically routes requests to other available nodes, keeping the business running without interruption. This automated fault tolerance greatly improves system reliability and reduces the need for manual intervention.
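The failover behavior described above can be sketched as a router that keeps an ordered replica list per shard and skips replicas marked down. The shard and node names are hypothetical; this is a sketch of the general mechanism, not kundb internals.

```python
class FailoverRouter:
    """Route reads for a shard to the first live replica in its list."""

    def __init__(self, replicas):
        self.replicas = replicas  # shard name -> ordered list of replica nodes
        self.down = set()

    def mark_down(self, node):
        """Called by a health checker when a node stops responding."""
        self.down.add(node)

    def route(self, shard):
        for node in self.replicas[shard]:
            if node not in self.down:
                return node
        raise RuntimeError(f"no live replica for shard {shard!r}")
```

In a real system the health checker would also mark nodes back up and rebalance leadership, but the core routing decision is this simple skip-the-dead scan.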
kundb employs columnar storage and in-memory computing to deliver outstanding query performance. Columnar storage lets kundb read only the columns a query needs, avoiding unnecessary I/O and significantly speeding up queries. Meanwhile, kundb uses memory for data caching and computation, minimizing disk access and further improving query performance. This makes kundb an ideal choice for real-time analytics, ad hoc queries, and similar scenarios.
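The columnar I/O advantage described above is easy to see in miniature: with one array per column, a filter touches only the column it references, while a row store would read every column of every row. The table name and columns below are invented for illustration.

```python
def column_scan(table, column, predicate):
    """Return row indices matching the predicate, reading only one column."""
    return [i for i, v in enumerate(table[column]) if predicate(v)]

# Column-oriented layout: one list per column, aligned by row index.
orders = {
    "order_id": [1, 2, 3, 4],
    "amount":   [120, 35, 560, 80],
    "region":   ["east", "west", "east", "south"],
}
big = column_scan(orders, "amount", lambda v: v > 100)
```

Real columnar engines add compression and vectorized execution on top of this layout, which is where most of the remaining speedup comes from.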
To meet the needs of different business scenarios, kundb provides flexible data models and rich data types. Whether the data is structured, semi-structured, or unstructured, kundb stores and processes it efficiently. kundb supports both relational and document data models, so users can choose the model that fits their actual needs. In addition, kundb offers advanced features such as full-text search and geolocation, further extending its range of applications.
kundb takes data security and privacy protection seriously. Through fine-grained access control and data encryption, kundb ensures data confidentiality and integrity. Users can set data-access policies for different roles and privileges, preventing unauthorized access and tampering.
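A role-based access check of the kind described above reduces to asking whether a role's policy set explicitly grants a (table, action) pair. The policy shape below is hypothetical, chosen only to illustrate the idea; kundb's actual policy syntax may differ.

```python
# Hypothetical role -> granted (table, action) pairs.
POLICIES = {
    "analyst": {("sales", "read")},
    "admin":   {("sales", "read"), ("sales", "write"), ("hr", "read")},
}

def is_allowed(role, table, action):
    """Deny by default: allow only if the role explicitly holds the grant."""
    return (table, action) in POLICIES.get(role, set())
```

Deny-by-default is the important design choice here: an unknown role or an unlisted action fails closed rather than open.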
KingbaseES Technical White Paper
Beijing BaseSoft Information Technologies Inc. (北京人大金仓信息技术股份有限公司)

Contents
1. Overview
2. Product Composition
3. Product Features
4. Generality
4.1 Compliance with International Standards
4.2 Cross-Platform Support
4.3 Multi-Language Support
4.4 Massive Data Storage and Management
4.5 XML Support
4.6 Full-Text Search Engine
4.7 Support for Web Applications
5. High Performance
5.1 Large-Scale Concurrent Processing
5.2 Effective Query Optimization Strategies
5.3 Enhanced Buffering
5.4 Server-Side Thread Pool
5.5 SMP Support and 64-bit Computing
5.6 Data Partitioning
6. High Security
6.1 Data Security Privilege Management
6.2 Data Security Access Control
6.3 Secure Data Storage
6.4 Secure Data Transmission
7. High Reliability
7.1 Failure Recovery
7.2 Dual-Machine Hot Standby
7.3 Automatic Backup Management
7.4 Clustering
8. Compatibility with Mainstream Databases
8.1 Compatibility with Oracle
8.2 Compatibility with DB2

Ease of use: complies with the SQL standard and is compatible with mainstream DBMSs. Ease of management: a system initialization tool and an integrated, easy-to-use enterprise manager. Editions: Security Edition features; Security and Enterprise Edition features; Security, Enterprise, and Standard Edition features; the Standalone Edition is trimmed and customized from the Standard Edition.

Provides a .NET Data Provider that meets the requirements of the .NET platform.
Provides interfaces conforming to the PHP extension specification.
Provides interfaces conforming to the Perl extension specification.
Provides a data-access interface compatible with Oracle OCI.
Comprehensive application-development support: supports popular development environments such as Visual, Eclipse, NetBeans, JBuilder, PowerBuilder, Delphi, and C++ Builder.
Zcloud White Paper

Introduction
Zcloud is an innovative cloud computing platform designed to provide enterprises with secure, reliable, and scalable cloud services. This white paper outlines the Zcloud platform's architecture, features, and advantages, to help enterprises understand how Zcloud can meet their cloud computing needs.
Architecture
The Zcloud platform consists of the following components:
Infrastructure as a Service (IaaS): provides compute, storage, and network resources.
Platform as a Service (PaaS): provides developer tools and services, such as databases, application servers, and data analytics.
Software as a Service (SaaS): provides pre-built business applications, such as CRM, ERP, and human resources management.
Cloud Management Platform (CMP): provides a unified management console for monitoring and managing cloud resources.
Features
The Zcloud platform offers a broad set of features, including:
Auto-scaling: automatically adjusts resources based on demand to optimize performance and cost.
High availability: ensures application availability through redundancy and failover mechanisms.
Security and compliance: follows industry best practices and certifications, such as ISO 27001 and SOC 2.
Multi-tenancy: allows multiple customers to use cloud resources in secure isolation.
Elasticity: supports scaling resources up and down on demand to match changing workload requirements.
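Auto-scaling of the kind listed above is often implemented as target tracking: pick the instance count that moves average utilization toward a target. This is a generic sketch; the target value and bounds are assumptions, not Zcloud parameters.

```python
import math

def autoscale(current_instances, cpu_utilization, target=0.6,
              min_instances=1, max_instances=10):
    """Return the desired instance count so that average CPU utilization
    moves toward `target`, clamped to the configured bounds."""
    desired = math.ceil(current_instances * cpu_utilization / target)
    return max(min_instances, min(max_instances, desired))
```

For example, 4 instances running at 90% CPU against a 60% target scale out to 6; the same fleet at 30% scales in to 2. Real controllers add cooldown periods so the fleet does not oscillate.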
Advantages
The Zcloud platform offers enterprises the following advantages:
Cost savings: reduces IT costs through elastic pricing and a pay-as-you-go model.
Agility: deploy and manage applications quickly, shortening time to market.
Scalability: easily scale resources with business needs to meet growing demand.
Security and reliability: keeps data and applications safe through advanced security measures and redundancy mechanisms.
Continuous innovation: regular updates and the introduction of new features keep the platform at the leading edge.
Use Cases
The Zcloud platform suits a wide range of use cases, including:
Application hosting: host business-critical applications for better performance and reliability.
Data analytics: use big-data analytics tools to extract insights from massive datasets.
Disaster recovery: build disaster recovery plans to protect critical data and applications.
Cloud migration: seamlessly migrate existing applications and infrastructure to the cloud.
DevOps: automate software development and deployment processes to improve efficiency.
Conclusion
This white paper has outlined the Zcloud platform's architecture, features, and advantages. By providing secure, reliable, and scalable cloud services, Zcloud aims to give enterprises a comprehensive solution that meets their cloud computing needs.
The Federated CMDB Vision
A Joint White Paper from BMC, CA, Fujitsu, HP, IBM, and Microsoft
Version 1.0, 25 Jan 2007

Authors
Dale Clark, CA
Pratul Dublish, Microsoft
Mark Johnson, IBM
Vincent Kowalski, BMC
Yannis Labrou, Fujitsu
Stefan Negritoiu, Microsoft
William Vambenepe, HP
Marv Waschke, CA
Van Wiles, BMC
Klaus Wurster, HP

Copyright Notice
© Copyright BMC, CA, Fujitsu, HP, IBM, and Microsoft 2007. All Rights Reserved.
THIS WHITEPAPER IS PROVIDED "AS IS," AND THE AUTHORS MAKE NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NON-INFRINGEMENT, OR TITLE; THAT THE CONTENTS OF THIS WHITEPAPER ARE SUITABLE FOR ANY PURPOSE; NOR THAT THE IMPLEMENTATION OF SUCH CONTENTS WILL NOT INFRINGE ANY THIRD PARTY PATENTS, COPYRIGHTS, TRADEMARKS OR OTHER RIGHTS.
THE AUTHORS WILL NOT BE LIABLE FOR ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF OR RELATING TO ANY USE OR DISTRIBUTION OF THIS WHITEPAPER.
The name and trademarks of the Authors may NOT be used in any manner, including advertising or publicity pertaining to this Whitepaper or its contents, without specific, written prior permission. Title to copyright in this Whitepaper will at all times remain with the Authors. No other rights are granted by implication, estoppel or otherwise.

Abstract
This whitepaper presents the vision for a CMDB that is federated from multiple management data repositories, providing the basis for a set of specifications that will standardize how such federation is enabled.

Status
This whitepaper is an initial draft release and is provided for review and evaluation only. It is being published to solicit feedback.
A feedback agreement is required before the working group can accept feedback. At some future date, the contents may be published under another name or under several new specifications, as shall be agreed by the authors and their respective corporations at that time.

Table of Contents
1. Introduction
1.1 Purpose
1.2 Problem
1.3 Solution
2. Requirements
2.1 Illustrative Use Case
2.2 General Requirements
3. Architecture
4. Services and Data Model
4.1 Federated CMDB Services
4.2 MDR Services
4.3 Data Model
5. Summary
5.1 Specification Phases
5.2 Expected Adoption

1. Introduction
1.1 Purpose
ITIL®-based Configuration Management Databases (CMDBs) are emerging as a prominent technology for Enterprise Management Software. The usefulness of these CMDBs is dependent on the quality, reliability and security of the data stored in them. A CMDB often contains data about managed resources like computer systems and application software, process artifacts like incident and change records, and relationships among them. The CMDB serves as a point of integration between IT management processes (as in Figure 1 below). Data from multiple sources needs to be managed directly or by reference in commercial CMDBs. The purpose of this paper is to describe how such data from multiple sources will be federated into a CMDB. The concepts and terms set forth here will provide the basis for industry-wide specifications to be subsequently written and proposed as formal standards.

Figure 1 - Role of a CMDB

1.2 Problem
In practice, the goal of federating data is often not met because the various management data are scattered across repositories that are not well integrated or coordinated. There is no standard for providers of Management Data Repositories (MDRs) to plug their data into a federating scheme.
This problem exists both for individual vendors trying to integrate with multiple CMDBs, and for customers who need to integrate data from multiple vendors' MDRs.

1.3 Solution
This paper describes the architecture for a CMDB that is created by federating multiple Management Data Repositories. The guiding principles for this architecture are as follows:
1. Enable a variety of data consumers to access a federation of management data through a standard access interface
2. Enable a variety of data providers to participate in a federation of management data through a standard provider interface
3. Provide an approach for reconciling and combining different information about the same resources
This architecture will express the mechanisms for federating the CMDB through standard interfaces without dictating the specific implementation. This will provide the basis for an industry-wide standard that will allow a large number of management data providers to integrate their data into a coherent, seamless CMDB. Consumers of management data will benefit from having a common view of resources and relationships between them, and the ability to use standard queries to retrieve information related to these resources.
From a business standpoint, the existence of such a standard architecture will:
• Drive down the cost of developing adaptors and translators
• Encourage greater participation by MDR implementers
• Improve the quality and integrity of CMDB data
• Reduce the time and effort required to integrate diverse data sources into a common CMDB
• Reduce the customer total cost of ownership of a CMDB by simplifying and centralizing management interfaces
• Enable better IT process management according to industry best practices like COBIT® and ITIL

2. Requirements
2.1 Illustrative Use Case
To make the general problems described in Section 1.2 more concrete, we describe a business scenario that is often encountered by IT management personnel, as follows. A problem with one or more resources is causing user transactions to fail or have unsatisfactory response times. The IT Service Desk has created incidents for the response time problems and assigned a subject matter expert to solve these incidents as quickly as possible. The subject matter expert will use resource and relationship data stored in the CMDB, and other management products, to locate the source of the problem. The subject matter expert will determine if the problem resources have had any changes applied recently which may have caused the problem.
IT personnel assigned to such an incident will need to quickly understand what is causing the problem with the application and the impact to business services. This information will help him or her determine how to resolve the service disruption, what priority should be placed on this problem, and if there is a need to enlist the assistance of other subject matter experts to assist with the resolution.
The tasks required to solve these incidents would typically be performed by a service desk analyst, subject matter expert(s), and/or an incident manager. Management components (or applications) involved in this scenario include:
• Application (transactions) monitoring application
• Resource monitoring application(s)
• Event analysis application
• Incident management application
• Change management application
• CMDB
The current state of the art in CMDB technology would require all these components to be integrated in proprietary solutions whose reusability would be rather limited. In the proposed architecture, core information about the applications and transactions (and their supporting IT infrastructure) is available through the Federated CMDB. Related configuration details, performance records, event logs, incidents, and change records are available through the standard queries that each application supports.
This architecture provides the basis for an integrated approach to service management.

2.2 General Requirements
In general, solutions to scenarios such as the one described in Section 2.1 share some common requirements. These requirements include:
• Web service interfaces for management data providers to plug into a common CMDB
• Web service interfaces for clients to access such data, including the reconciled common identity of resources, relationships between resources, and related information
• Data aggregation, normalization, federation, and reconciliation
• Secure, authenticated access to these services controlled by an administrative interface
We will consider these interfaces to be the primary focus of our standard architecture, while the implementation of the interfaces and the resulting reconciled resource models will depend on the particular implementation of a Federated CMDB.

3. Architecture
The Federated CMDB architecture will allow IT organizations to create and configure a CMDB from MDRs with various scopes and arranged in various topologies. The external view of the architecture is embodied in Figure 2 below:

Figure 2 - Federated CMDB High Level Architecture

The elements depicted in Figure 2 are:
• Federated CMDB
• Consumers who are clients of the data of the CMDB
• Providers (MDRs) whose data are federated into the CMDB
• Data Provider Interfaces for MDRs to plug their data into the CMDB
• Data Consumer Interfaces for clients to access the data from the CMDB
The data consumer interfaces will expose data retrieval operations such as query and subscription. MDRs can plug into the Federated CMDB if they implement the data provider interfaces.
These interfaces will allow data sources to register the resources they manage, details about the available data, and how the data can be queried.

4. Services and Data Model
The services to support the above architecture will include the following:
- Federated CMDB Services, which provide for:
  Administration of the Federated CMDB
  Resource federation, including the registration of resources by MDRs
  Query of the reconciled resources and their relationships in the Federated CMDB
  Subscription and notification for both resource and metadata changes in the Federated CMDB
- MDR Services, which provide for:
  Query of the resource records and relationships managed by the MDR
  Subscription and notification for resource changes in the MDR
- A Data Model, which will represent resource schema (including resource types, identifying properties, and other data properties)

Figure 3 - Federated CMDB Overview

4.1 Federated CMDB Services
There are four Federated CMDB services. Two (federation and administration) are for providers, and two (query and subscription) are for consumers.
• Administration of the Federated CMDB
An administration service provides a primary interface of the Federated CMDB. All MDRs that participate in the Federated CMDB are registered with the administration service. Each MDR is associated with schemas (such as resource definition classes) and capabilities (such as support for a query dialect) that it supports. The administration services of the Federated CMDB allow management of the data sources and types that are federated, and control of the content, integrity, and access to the resource data that is registered and federated.
• Resource Federation and Registration
A federation service provides the interface to add, modify, and delete resource definitions. The Federated CMDB manages a directory of all registered resources (configuration items, relationships, and process artifacts). It is responsible for reconciling the different perspectives of a resource's identity into a common "resource context".
It may also manage additional data properties provided by MDRs when resources are added to the federation service.
• Resource Query
A resource query interface allows client applications to retrieve data from the Federated CMDB based on certain criteria. This includes an instance query to retrieve items based on identifying properties, and a relationship query to enumerate a set of items and relationships based on attributes of the relationships and endpoints.
• Subscription and Notification [1]
A subscription service provides applications the ability to register for data change events that could occur in the Federated CMDB. Applications needing to respond to data change (update, insert, delete) events will subscribe to that event and will be delivered notification when such an event occurs.

4.2 MDR Services
An MDR must register itself with the Federated CMDB administration service. It may also register its capabilities, schemas, resources, and relationships. It may support a query service and a subscription service.
• Resource Query
A query service may be used to retrieve extended information about one or more registered resources. This includes an instance query to retrieve items based on identifying properties, and a relationship query to enumerate a set of items and relationships based on attributes of the relationships and endpoints.
• Subscription and Notification [2]
An MDR may support subscription and notification of changes to registered resources. This allows a client of the MDR subscription service to be notified of changes in the MDR data, similar to the Federated CMDB subscription service.

[1] Federated CMDB subscription & notification is not planned for the first phase of the CMDB Federation specifications.
[2] MDR subscription & notification is not planned for the first phase of the CMDB Federation specifications.

4.3 Data Model
A common data model is essential for successful federation of data from multiple MDRs.
A fundamental Data Model that underlies the Federated CMDB must include agreed-upon definitions of:
• Fundamental Data Types (for example Configuration Items, Relationships, and Process Artifacts)
• Meta Data (information about the structure and format of the data)
• Resource Identity (a consistent mechanism for identifying resources using a resource type and identifying properties, and an associated qualified name or globally unique ID)

5. Summary
5.1 Specification Phases
In order to progress rapidly, the CMDB Federation specifications will be proposed in phases. The first phase will focus on data provider and data consumer interfaces, allowing many management vendors to participate in federation of data, and allowing consumers to query the consolidated resource information from the Federated CMDB and its registered MDRs.
The interfaces in the first phase are primarily intended to support non-federated queries of a single repository, though implementations may support federated queries with these interfaces. Later versions of the specification may define extensions to these interfaces that improve the capabilities and/or performance of federated queries.
Subsequent phases of the CMDB Federation standard may define interfaces for:
• Subscription and notification interfaces for the Federated CMDB and MDR services
• More granular security mechanisms
• Storing information that associates resources with services that can be launched in the context of the resource
5.2 Expected Adoption
The CMDB Federation specifications will allow management data providers to write a single integration interface that can be used with any compliant Federated CMDB. Adoption and standardization of these specifications will reduce the cost of ITIL process management and accelerate the adoption of ITIL practices. This will result in improved IT services and customer satisfaction, as the promise of ITIL is realized.
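The consumer-side query flow the architecture describes, fan a query out to every registered MDR that advertises the requested resource type, then merge records under one reconciled identity, can be sketched as follows. The MDR names, record shapes, and registry structure are hypothetical, not part of the specification.

```python
def federated_query(registry, resource_type, props):
    """Fan a consumer query out to MDRs advertising the resource type,
    then merge their records by reconciled resource identity."""
    merged = {}
    for mdr in registry:
        if resource_type not in mdr["schemas"]:
            continue
        for record in mdr["query"](resource_type, props):
            entry = merged.setdefault(
                record["id"], {"id": record["id"], "sources": {}}
            )
            entry["sources"][mdr["name"]] = record["data"]
    return list(merged.values())

# Two toy MDRs that both know the same server under a shared identity.
registry = [
    {"name": "asset_mdr", "schemas": {"ComputerSystem"},
     "query": lambda t, p: [{"id": "cs-01", "data": {"os": "linux"}}]},
    {"name": "change_mdr", "schemas": {"ComputerSystem", "ChangeRecord"},
     "query": lambda t, p: [{"id": "cs-01", "data": {"last_change": "RFC-7"}}]},
]
results = federated_query(registry, "ComputerSystem", {"hostname": "web01"})
```

The hard part the specification reserves for the federation service, reconciling different identity perspectives into one "resource context", is assumed away here by giving both records the same `id`.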
EEPlat MetaData-Driven Multi-Tenant Architecture

Contents
1 Introduction
2 Multi-Tenant Data Storage
3 MetaModel System
3.1 Background Processing
3.2 UI
4 EEPlat Runtime Engine & Phone Support
5 EEPlat and Appforce Comparison
6 Conclusions

1 Introduction
EEPlat is a metadata-driven multi-tenant architecture and a development and runtime platform based on metadata, templates, and JavaScript. EEPlat has a leading model system and can deliver robust, reliable, flexible applications. Once configured, both PC and mobile interfaces can be generated from the same metadata. EEPlat can run on a variety of cloud computing platforms (EC2, Google App Engine, Windows Azure, Heroku, etc.) while also supporting traditional computing environments. Metadata is data about data. In EEPlat, that data covers business data, business logic, business processes, and the user interface; metadata can also be called the model. EEPlat makes a further abstraction on top of metadata, called the metamodel, further improving the application's flexibility and scalability.
[Figure: EEPlat abstraction layers. Data: business system items (business data, logic, UI). MetaData / Model: Business Object, Service, Pane. MetaModel / model about model: Business Object MetaModel, Service MetaModel, Rule MetaModel, Pane MetaModel, Grid MetaModel, Menu MetaModel, etc.]
For multi-tenant applications, the needs of different tenants almost always differ, and it is quite natural for each tenant to require a customized application. If the multi-tenant application were a statically compiled binary file, meeting these per-tenant customization requirements would be nearly impossible. Yet a multi-tenant application must satisfy the reasonable functional and interface requirements of different tenants.
For these reasons, the EEPlat application platform generates the application corresponding to each tenant from that tenant's metadata definitions, rather than from a compiled binary executable file. Model-driven development is EEPlat's core and foundation. Each tenant's metadata and each tenant's business data are clearly separated, with explicit boundaries between them, so an application can be safely customized or modified for one tenant without affecting other tenants.
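The metadata-driven generation described above can be illustrated in miniature: a declarative model of a business object drives a renderer, so the same metadata could feed both a PC and a mobile renderer. The metadata shape below is invented for illustration and is not EEPlat's actual model format.

```python
def render_form(metadata):
    """Generate a plain-text form from a business-object model."""
    lines = [f"== {metadata['label']} =="]
    for field in metadata["fields"]:
        # A field with options renders as a select; otherwise use its type.
        widget = "select" if field.get("options") else field.get("type", "text")
        lines.append(f"{field['label']}: [{widget}]")
    return "\n".join(lines)

# Hypothetical tenant-specific model for a Customer object.
customer_model = {
    "label": "Customer",
    "fields": [
        {"name": "name", "label": "Name", "type": "text"},
        {"name": "tier", "label": "Tier", "options": ["gold", "silver"]},
    ],
}
out = render_form(customer_model)
```

Customizing a tenant's application then means editing its metadata record, not recompiling code, which is the central claim of the platform.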
EEPlat can be divided into three parts: data storage, the meta-model system, and the runtime engine. The following figure shows the overall architecture:

[Figure: EEPlat overall architecture. Business systems (e.g. CRM, OA, HR, ERP, MIS) are built from Common MetaData plus Tenant-Specific MetaData; these drive Common and Tenant-Specific Declarative Domain Objects (Models) in the EEPlat Metamodel System; multi-tenant data storage holds data tables and tenant-specific views with multi-tenant-optimized data store & query; the EEPlat Runtime Engine serves both PC browser and mobile clients.]
2 Multi-Tenant Data Storage
EEPlat's data storage supports the following isolation techniques: sparse columns, tenantId-field isolation, and an independent database per tenant.
1. Sparse columns. Similar to Salesforce's Appforce, all customizations are stored primarily through a common table that holds the tenants' fields together with a large number of uniform data columns (for example, 500). EEPlat optimizes tenant data through per-tenant data partitioning. Unlike Appforce, EEPlat provides a two-layer metadata abstraction in which business metadata and tenant storage metadata are independent, and EEPlat provides a more powerful model system.
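The tenantId-field isolation technique listed above comes down to one invariant: every query the platform generates against a shared table carries a tenant predicate. A minimal sketch, with an invented table and column names (EEPlat's actual query generator is not specified here):

```python
def tenant_scoped_sql(table, columns, tenant_id):
    """Build a parameterized query that is always scoped to one tenant,
    so rows of other tenants in the shared table are never visible."""
    cols = ", ".join(columns)
    return f"SELECT {cols} FROM {table} WHERE tenant_id = ?", (tenant_id,)

sql, params = tenant_scoped_sql("accounts", ["id", "name"], "tenant_42")
```

Centralizing this in one query builder, rather than trusting every call site to remember the predicate, is what makes the shared-table (sparse column) scheme safe in practice.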